June 15, 2010


If a hypothesis is borne out by repeated experiments, it becomes a theory, an explanation that seems to fit the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler’s theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. Nevertheless, studies conducted by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930s repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments, and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.


If other scientists do not have access to scientific results, the research may as well not have been carried out at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results to other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.

In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.

Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations, bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.

Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected.

The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other’s work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.

Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.

When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940s. McClintock discovered a new phenomenon in corn cells known as ‘transposable genes’, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960s, when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for her work on transposable genes, more than thirty-five years after doing the research.

In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.

The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and diplomat Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.

In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups.

Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs and what effects such change is likely to have on humans and their environment.

Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research.

In the 20th century mathematics made rapid advances on all fronts. The foundations of mathematics became more solidly grounded in logic, while at the same time mathematics advanced the development of symbolic logic. Philosophy was not the only field to progress with the help of mathematics. Physics, too, benefited from the contributions of mathematicians to relativity theory and quantum theory. In fact, mathematics achieved broader applications than ever before, as new fields developed within mathematics (computational mathematics, game theory, and chaos theory) and other branches of knowledge, including economics and physics, achieved firmer grounding through the application of mathematics. Even the most abstract mathematics seemed to find application, and the boundaries between pure mathematics and applied mathematics grew ever fuzzier. Mathematicians searched for unifying principles and general statements that applied to large categories of numbers and objects. In algebra, the study of structure continued with a focus on structural units called rings, fields, and groups, and at mid-century it extended to the relationships between these categories. Algebra became an important part of other areas of mathematics, including analysis, number theory, and topology, as the search for unifying theories moved ahead. Topology, the study of the properties of objects that remain constant during transformation, or stretching, became a fertile research field, bringing together geometry, algebra, and analysis. Because of the abstract and complex nature of most 20th-century mathematics, most of the remaining sections of this article will discuss practical developments in mathematics with applications in more familiar fields.

Until the 20th century the centres of mathematics research in the West were all located in Europe. Although the University of Göttingen in Germany, the University of Cambridge in England, the French Academy of Sciences and the University of Paris, and the University of Moscow in Russia retained their importance, the United States rose in prominence and reputation for mathematical research, especially the departments of mathematics at Princeton University and the University of Chicago.

At the Second International Congress of Mathematicians held in Paris in 1900, German mathematician David Hilbert spoke to the assembly. Hilbert was a professor at the University of Göttingen, the former academic home of Gauss and Riemann. Hilbert’s speech at Paris was a survey of twenty-three mathematical problems that he felt would guide the work being done in mathematics during the coming century. These problems stimulated a great deal of the mathematical research of the 20th century, and many of the problems were solved. When news breaks that another ‘Hilbert problem’ has been solved, mathematicians worldwide impatiently await further details.

Hilbert contributed to most areas of mathematics, starting with his classic Grundlagen der Geometrie (Foundations of Geometry), published in 1899. Hilbert’s work created the field of functional analysis (the analysis of functions as a group), a field that occupied many mathematicians during the 20th century. He also contributed to mathematical physics. From 1915 on he fought to have Emmy Noether, a noted German mathematician, hired at Göttingen. When the university refused to hire her because of objections to the presence of a woman in the faculty senate, Hilbert countered that the senate was not the changing room for a swimming pool. Noether later made major contributions to ring theory in algebra and wrote a standard text on abstract algebra.

In some ways pure mathematics became more abstract in the 20th century, as it joined forces with the field of symbolic logic in philosophy. The scholars who bridged the fields of mathematics and philosophy early in the century were Alfred North Whitehead and Bertrand Russell, who worked together at Cambridge University. They published their major work, Principia Mathematica (Principles of Mathematics), in three volumes from 1910 to 1913. In it they demonstrated the principles of mathematical logic and attempted to show that all of mathematics could be deduced from a few premises and definitions by the rules of formal logic. In the late 19th century, German mathematician Gottlob Frege had provided the system of notation for mathematical logic and paved the way for the work of Russell and Whitehead. Mathematical logic influenced the direction of 20th-century mathematics, including the work of Hilbert.

Hilbert proposed that the underlying consistency of all mathematics could be demonstrated within mathematics. Nevertheless, Austrian logician Kurt Gödel proved that the goal of establishing the completeness and consistency of every mathematical theory is impossible. Despite its negative conclusion, Gödel’s Theorem, published in 1931, opened new areas in mathematical logic. One area, known as recursion theory, played a major role in the development of computers.

Several revolutionary theories, including relativity and quantum theory, challenged existing assumptions in physics in the early 20th century. The work of a number of mathematicians contributed to these theories. Among them was Noether, whose gender had denied her a paid position at the University of Göttingen. Noether’s mathematical formulations on invariants (quantities that remain unchanged as other quantities change) contributed to Einstein’s theory of relativity. Russian-born German mathematician Hermann Minkowski contributed to relativity the notion of the space-time continuum, with time as a fourth dimension. Hermann Weyl, a student of Hilbert’s, investigated the geometry of relativity and applied group theory to quantum mechanics. Weyl’s investigations helped advance the field of topology. Early in the century Hilbert quipped, ‘Physics is getting too difficult for physicists.’

Hungarian-born American mathematician John von Neumann built a solid mathematical basis for quantum theory with his text Mathematische Grundlagen der Quantenmechanik (1932, Mathematical Foundations of Quantum Mechanics). This investigation led him to explore algebraic operators and the groups associated with them, opening a new area now known as von Neumann algebras. Von Neumann, however, is probably best known for his work in game theory and computers.

During World War II (1939-1945) mathematicians and physicists worked together on developing radar, the atomic bomb, and other technology that helped defeat the Axis powers. Polish-born mathematician Stanislaw Ulam solved the problem of how to initiate fusion in the hydrogen bomb. Von Neumann participated in numerous US defence projects during the war.

Mathematics plays an important role today in cosmology and astrophysics, especially in research into big bang theory and the properties of black holes, antimatter, elementary particles, and other unobservable objects and events. Stephen Hawking, among the best-known cosmologists of the 20th century, in 1979 was appointed Lucasian Professor of Mathematics at the University of Cambridge, a position once held by Newton.

Mathematics formed an alliance with economics in the 20th century as the tools of mathematical analysis, algebra, probability, and statistics illuminated economic theories. A specialty called econometrics links enormous numbers of equations to form mathematical models for use as forecasting tools.

Game theory began in mathematics but had immediate applications in economics and military strategy. This branch of mathematics deals with situations in which some sort of decision must be made to maximize a profit, that is, to win. Its theoretical foundations were supplied by von Neumann in a series of papers written during the 1930s and 1940s. Von Neumann and economist Oskar Morgenstern published the results of their investigations in The Theory of Games and Economic Behaviour (1944). John Nash, the Princeton mathematician profiled in the motion picture A Beautiful Mind, shared the 1994 Nobel Prize in economics for his work in game theory.
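The central object of game theory, an equilibrium from which no player profits by deviating, is easy to illustrate computationally. The sketch below finds the pure-strategy Nash equilibria of a small two-player game by brute force; the payoff matrix is the classic prisoner's dilemma, chosen purely as an illustration and not drawn from von Neumann's own papers.

```python
# payoffs[row][col] = (payoff to row player, payoff to column player)
# Strategies: 0 = cooperate, 1 = defect (classic prisoner's dilemma values)
payoffs = [
    [(3, 3), (0, 5)],
    [(5, 0), (1, 1)],
]

def pure_nash_equilibria(payoffs):
    """Return (row, col) strategy pairs where neither player gains by deviating."""
    equilibria = []
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    for r in range(n_rows):
        for c in range(n_cols):
            # Row player cannot do better by switching rows, column fixed:
            row_best = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(n_rows))
            # Column player cannot do better by switching columns, row fixed:
            col_best = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(n_cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: mutual defection
```

Mutual defection is the only equilibrium even though both players would be better off cooperating, which is what makes the game a dilemma.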

Mathematicians, physicists, and engineers contributed to the development of computers and computer science. Nevertheless, the early, theoretical work came from mathematicians. English mathematician Alan Turing, working at Cambridge University, introduced the idea of a machine that could automatically carry out mathematical operations and solve equations. The Turing machine, as it became known, was a precursor of the modern computer. Through his work Turing brought together the elements that form the basis of computer science: symbolic logic, numerical analysis, electrical engineering, and a mechanical vision of human thought processes.
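Turing's abstract machine reads a tape one symbol at a time, rewrites it, and moves left or right according to a fixed rule table. A minimal sketch of such a simulator follows; the toy machine it runs, which flips every bit of its input, is invented for illustration and is not one of Turing's own constructions.

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is "R" or "L". The machine stops in state "halt".
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Toy machine: flip every bit, moving right until the blank is reached.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # 01001
```

Despite its simplicity, the rule-table formalism is powerful enough to express any computation a modern computer can perform, which is why the model remains the standard definition of computability.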

Computer theory is the third area with which von Neumann is associated, in addition to mathematical physics and game theory. He established the basic principles on which computers operate. Turing and von Neumann both recognized the usefulness of the binary arithmetic system for storing computer programs.

The first large-scale digital computers were pioneered in the 1940s. Von Neumann described the design of the EDVAC (Electronic Discrete Variable Automatic Computer) in 1945, while at the Institute for Advanced Study in Princeton. Engineers John Eckert and John Mauchly built ENIAC (Electronic Numerical Integrator and Calculator), which began operation at the University of Pennsylvania in 1946. As increasingly complex computers are built, the field of artificial intelligence has drawn attention. Researchers in this field attempt to develop computer systems that can mimic human thought processes.

Mathematician Norbert Wiener, working at the Massachusetts Institute of Technology (MIT), also became interested in automatic computing and developed the field known as cybernetics. Cybernetics grew out of Wiener’s work on increasing the accuracy of bombsights during World War II. From this came a broader investigation of how information can be translated into improved performance. Cybernetics is now applied to communication and control systems in living organisms.

Computers have exercised an enormous influence on mathematics and its applications. As ever more complex computers are developed, their applications proliferate. Computers have given great impetus to areas of mathematics such as numerical analysis and finite mathematics. Computer science has suggested new areas for mathematical investigation, such as the study of algorithms. Computers also have become powerful tools in areas as diverse as number theory, differential equations, and abstract algebra. In addition, the computer has made possible the solution of several long-standing problems in mathematics, such as the four-colour theorem first proposed in the mid-19th century.

The four-colour theorem states that four colours are sufficient to colour any map, given that any two countries with a contiguous boundary require different colours. Mathematicians at the University of Illinois finally confirmed the theorem in 1976 by means of a large-scale computer analysis that reduced the number of possible map configurations to fewer than 2,000. The program they wrote ran to thousands of lines and took more than 1,200 hours to run. Many mathematicians, however, do not accept the result as a proof because it cannot be checked by hand: verification would require far too many human hours. Some mathematicians also object to the solution’s lack of elegance. This complaint has been paraphrased, ‘A good mathematical proof is like a poem; this is a telephone directory.’
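The flavour of such computer-assisted checking can be suggested with a toy. The sketch below four-colours a small hypothetical map by backtracking; the map and country names are invented for illustration, and the actual 1976 proof analysed nearly 2,000 special configurations rather than colouring individual maps.

```python
def four_colour(adjacency, colours=("red", "green", "blue", "yellow")):
    """Assign one of four colours to each country so that no two
    neighbours share a colour, using simple backtracking search."""
    countries = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(countries):
            return True  # every country coloured
        country = countries[i]
        for colour in colours:
            # A colour is legal if no already-coloured neighbour uses it.
            if all(assignment.get(nb) != colour for nb in adjacency[country]):
                assignment[country] = colour
                if backtrack(i + 1):
                    return True
                del assignment[country]  # undo and try the next colour
        return False

    return assignment if backtrack(0) else None

# A hypothetical map: each country lists the countries it borders.
map_graph = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C", "E"],
    "E": ["D"],
}
colouring = four_colour(map_graph)
# Countries A, B, C, D all border one another, so all four colours are needed.
```

Countries A through D form a mutually bordering cluster, so the search is forced to use all four colours; the theorem guarantees it never needs a fifth.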

Hilbert inaugurated the 20th century by proposing twenty-three problems that he expected to occupy mathematicians for the next 100 years. A number of these problems, such as the Riemann hypothesis about prime numbers, remain unsolved in the early 21st century. Hilbert claimed, ‘If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?’

The existence of old problems, along with new problems that continually arise, ensures that mathematics research will remain challenging and vital through the 21st century. Influenced by Hilbert, the Clay Mathematics Institute in Cambridge, Massachusetts, announced the Millennium Prize in 2000 for solutions to mathematics problems that have long resisted solution. Among the seven problems is the Riemann hypothesis. An award of $1 million awaits the mathematician who solves any of these problems.

Minkowski, Hermann (1864-1909), Russian-born German mathematician, who developed the concept of the space-time continuum. He was born in Russia and attended and then taught at German universities. To the three dimensions of space, Minkowski added the concept of a fourth dimension, time. This concept developed from Albert Einstein's 1905 relativity theory and became, in turn, the framework for Einstein's 1916 general theory of relativity.

Gravitation is one of the four fundamental forces of nature, along with electromagnetism and the weak and strong nuclear forces, which hold together the particles that make up atoms. Gravitation is by far the weakest of these forces and, as a result, is not important in the interactions of atoms and nuclear particles or even of moderate-sized objects, such as people or cars. Gravitation is important only when very large objects, such as planets, are involved. This is true for several reasons. First, the force of gravitation reaches great distances, while nuclear forces operate only over extremely short distances and decrease in strength very rapidly as distance increases. Second, gravitation is always attractive. In contrast, electromagnetic forces between particles can be repulsive or attractive depending on whether the particles both have a positive or negative electrical charge, or they have opposite electrical charges. These attractive and repulsive forces tend to cancel each other out, leaving only a weak net force. Gravitation has no repulsive force and, therefore, no such cancellation or weakening.
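The weakness of gravitation can be made concrete by comparing it with the electromagnetic force for the same pair of particles. The sketch below does this for two protons; the separation chosen is illustrative, and since both forces follow an inverse-square law the ratio is the same at any distance.

```python
# Approximate physical constants (SI units)
G = 6.674e-11       # gravitational constant, N·m²/kg²
k = 8.988e9         # Coulomb constant, N·m²/C²
m_p = 1.673e-27     # proton mass, kg
q_p = 1.602e-19     # proton charge, C

r = 1e-15  # one femtometre, a typical nuclear distance (illustrative)

f_gravity = G * m_p**2 / r**2    # Newton's law of gravitation
f_electric = k * q_p**2 / r**2   # Coulomb's law

ratio = f_electric / f_gravity
print(f"electric/gravity ratio: {ratio:.2e}")
# The electrostatic repulsion is roughly 10^36 times stronger than
# the gravitational attraction between the same two protons.
```

Because the distance dependence cancels in the ratio, the factor of about 10^36 holds whether the protons are in a nucleus or light-years apart, which is why gravity matters only for planet-sized accumulations of matter.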

After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.

The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size.

Falling objects accelerate in response to the force exerted on them by Earth’s gravity. Different objects accelerate at the same rate, regardless of their mass. This illustration shows the speed at which a ball and a cat would be moving and the distance each would have fallen at intervals of a tenth of a second during a short fall.

If an object held near the surface of the earth is released, it will fall and accelerate, or pick up speed, as it descends. This acceleration is caused by gravity, the force of attraction between the object and the earth. The force of gravity on an object is also called the object's weight. This force depends on the object's mass, or the amount of matter in the object. The weight of an object is equal to the mass of the object multiplied by the acceleration due to gravity.

A bowling ball that weighs 16 lb is pulled toward the earth with a force of 16 lb. In the metric system, the bowling ball is pulled toward the earth with a force of 71 newtons (a metric unit of force, abbreviated N). The bowling ball also pulls on the earth with a force of 16 lb (71 N), but the earth is so massive that it does not move appreciably. In order to hold the bowling ball up and keep it from falling, a person must exert an upward force of 16 lb (71 N) on the ball. This upward force acts to oppose the 16 lb (71 N) downward weight force, leaving a total force of zero. The total force on an object determines the object's acceleration.
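The arithmetic behind these figures is just the weight relation from the text, weight = mass × g. A quick check, assuming the standard pound-to-kilogram conversion:

```python
g = 9.8              # acceleration due to gravity at Earth's surface, m/s²
LB_TO_KG = 0.4536    # pounds (as mass) to kilograms

mass = 16 * LB_TO_KG     # the 16 lb bowling ball is about 7.3 kg
weight = mass * g        # weight = mass × g, a force in newtons

print(f"{weight:.0f} N")  # 71 N, matching the figure in the text
```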

If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, the 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
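The speeds and distances in the ball-and-cat illustration follow from the two standard formulas for uniform acceleration, v = g·t and d = ½·g·t². A short sketch tabulating the first half second of a fall:

```python
g = 9.8  # acceleration due to gravity, m/s²

# Speed and distance fallen at tenth-of-a-second intervals,
# as in the ball-and-cat illustration.
for tenths in range(1, 6):
    t = tenths / 10
    speed = g * t               # v = g·t
    distance = 0.5 * g * t**2   # d = ½·g·t²
    print(f"t={t:.1f} s  speed={speed:.2f} m/s  distance={distance:.3f} m")
```

After half a second, any freely falling object, ball or cat, is moving at 4.9 m/s and has dropped about 1.2 m, independent of its mass.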

The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb. (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
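The altitude figures follow from the inverse-square law: weight scales as the square of (Earth's radius ÷ distance from Earth's centre). A sketch using the approximate 4,000-mile radius implied by the text:

```python
R_EARTH_MILES = 4000  # approximate Earth radius used in the text

def weight_at_altitude(surface_weight, altitude_miles):
    """Weight at a given altitude, by the inverse-square law."""
    r = R_EARTH_MILES + altitude_miles
    return surface_weight * (R_EARTH_MILES / r) ** 2

# At 4,000 miles of altitude the distance from Earth's centre doubles,
# so the 16 lb bowling ball weighs one quarter as much.
print(weight_at_altitude(16, 4000))  # 4.0
```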

The special theory of relativity dealt only with constant, as opposed to accelerated, motion of the frames of reference, and the Lorentz transformations apply to frames moving with uniform motion with respect to each other. In 1915-1916, Einstein extended relativity to account for the more general case of accelerated frames of reference in his general theory of relativity. The central idea in general relativity theory, which accounts for accelerated motion, is that it is impossible to distinguish between the effects of gravity and those of nonuniform motion. If we did not know, for example, that we were on a spacecraft accelerating at a constant rate and dropped a cup of coffee, we could not determine whether the mess on the floor was due to the effects of gravity or to the accelerated motion. This inability to distinguish between nonuniform motion, like an acceleration, and gravity is known as the ‘principle of equivalence’.

In this context, Einstein posited the laws relating space and time measurements carried out by two observers, as for one observer in an accelerating spacecraft and another on Earth. Force fields, like gravity, cause space-time, Einstein concluded, to become warped or curved and hence non-Euclidean in form. In the general theory the motion of material points, including light, is not along straight lines, as in Euclidean space, but along geodesics. This prediction was confirmed in an experiment performed during a total eclipse of the Sun by Arthur Eddington in 1919.

Here, as in the special theory, visualization may help us understand the situation but does not really describe it. This is nicely illustrated in the typical visual analogy used to illustrate what curved spatial geodesics mean. In this analogy, a tremendous sheet of paper extends infinitely in all directions. The inhabitants of this flatland, the flatlanders, are not even aware of the third dimension. Since their world is perfectly Euclidean, any two parallel lines, no matter how far extended, would never meet.

We are then asked to move our flatlanders to a new land on the surface of a large sphere. Initially, our relocated population would perceive their new world as identical to the old, or as Euclidean and flat. Next we suppose them to send beams of laser light along the surface of their two-dimensional world for thousands of miles. The discovery is then made that if the two beams of light are sent in parallel directions, they come together after travelling a thousand miles.

After experiencing utter confusion in the face of these results, the flatlanders eventually realize that their world is non-Euclidean, or curved, and invent Riemannian geometry to describe the curved space. The analogy normally concludes with the suggestion that we are the flatlanders, with the difference being that our story takes place in three, rather than two, spatial dimensions. Just as the flatlanders could not visualize the curved two-dimensional surface of their world, so we cannot visualize a three-dimensional curved space.

Thus a visual analogy to illustrate the reality described by the general theory is useful only to the extent that it entices us into an acceptance of the proposition that the reality is unvisualizable. Yet here, as in the special theory, there is no ambiguity in the mathematical description of this reality. Although curved geodesics are not any more unphysical than straight lines, visualizing the three spatial dimensions as a ‘surface’ in the higher four-dimensional space-time cannot be done. Visualization may help us better understand what is implied by the general theory, but it does not disclose what is really meant by the theory.

The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century BC, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements: earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.

At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.

In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe and his student, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1609 Galileo observed that moons orbited the planet Jupiter, a fact that could not fit into an earth-centred model of the heavens.

The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.

Gravitational Forces: Because the Moon has significantly less mass than Earth, the weight of an object on the Moon's surface is only one-sixth the object's weight on Earth's surface. This graph shows how much an object that weighs w on Earth would weigh at different points between the Earth and Moon. Since the Earth and Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposite gravitational forces would cancel and the object's weight would be zero.
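The cancellation point described in the caption can be checked with a short calculation. The sketch below is illustrative only: the masses and the mean Earth-Moon separation are rounded reference figures, not values given in the text.

```python
# Find the distance from Earth at which the gravitational pulls of the
# Earth and Moon on an object cancel. Along the Earth-Moon line the
# magnitudes balance where M_earth / d**2 == M_moon / (D - d)**2.
# The three values below are rounded reference figures (assumed here).
M_EARTH = 5.97e24   # kg
M_MOON = 7.35e22    # kg
D = 384_400         # mean Earth-Moon distance, km

def balance_point(m1: float, m2: float, separation: float) -> float:
    """Distance from body 1 at which the two pulls cancel (same units as separation)."""
    # Solving m1/d**2 == m2/(D - d)**2 for d gives d = D / (1 + sqrt(m2/m1)).
    return separation / (1 + (m2 / m1) ** 0.5)

d = balance_point(M_EARTH, M_MOON, D)
print(round(d))  # roughly 346,000 km, in line with the figure in the caption
```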

To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed in a straight line, and that it takes a force to slow, speed up, or change the path of an object. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.

Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.
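The proportionalities described above can be demonstrated in a few lines of code. This is a minimal sketch, using the accepted value of G quoted later in the article:

```python
# Newton's law of gravitation: F = G*M*m / d**2.
G = 6.670e-11  # universal constant of gravitation, N*m^2/kg^2 (value quoted in the article)

def gravitational_force(M: float, m: float, d: float) -> float:
    """Attractive force in newtons between point masses M and m (kg), d metres apart."""
    return G * M * m / d ** 2

f = gravitational_force(10.0, 5.0, 2.0)
# Doubling one mass doubles the force...
assert gravitational_force(20.0, 5.0, 2.0) == 2 * f
# ...while doubling the distance makes the force one quarter as great.
assert gravitational_force(10.0, 5.0, 4.0) == f / 4
```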

According to Newton, the force acts along a line between the two particles. In the case of two spheres, it acts along the line between their centres. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the centre of gravity, at which all the force of gravity can be considered to be acting.

Newton's law affects all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces overall, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed an experiment that could measure this constant, but gravitational forces are very weak between objects found in a common laboratory and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg², close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by ten zeros and then the number 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 metre from each other, is about sixty-seven millionths of a newton, or about fifteen millionths of a pound.
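The closing figure can be verified directly from the law and the accepted value of G:

```python
# Check the claim that two 1-metric-ton masses 1 metre apart attract each
# other with a force of about sixty-seven millionths of a newton.
G = 6.670e-11                       # N*m^2/kg^2, the accepted value quoted above
force = G * 1000 * 1000 / 1 ** 2    # F = G*M*m/d**2 with M = m = 1000 kg, d = 1 m
print(force)                        # 6.67e-05 N, i.e. about 67 millionths of a newton
```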

Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can affect each other across distance. This way of thinking about interactions has proved to be very important in the development of modern physics.

Newton's law of gravitation was the first theory to describe the motion of objects on the earth accurately as well as the planetary motion that astronomers had long observed. According to Newton's theory, the gravitational attraction between the planets and the sun holds the planets in elliptical orbits around the sun. The earth's moon and moons of other planets are held in orbit by the attraction between the moons and the planets. Newton's law led to many new discoveries, the most important of which was the discovery of the planet Neptune. Scientists had noted unexplained variations in the motion of the planet Uranus for many years. Using Newton's law of gravitation, the French astronomer Urbain Leverrier and the British astronomer John Couch Adams each independently predicted the existence of a more distant planet that was perturbing the orbit of Uranus. Neptune was discovered in 1846, in an orbit close to its predicted position.

Frames of Reference: A situation can appear different when viewed from different frames of reference. Try to imagine how an observer's perceptions could change from frame to frame in this illustration.

Scientists used Newton's theory of gravitation successfully for many years. Several problems began to arise, however, involving motion that did not follow the law of gravitation or Newtonian mechanics. One problem was the observed but unexplained deviations in the orbit of Mercury (which could not be caused by the gravitational pull of another orbiting body).

Another problem with Newton's theory involved reference frames, that is, the conditions under which an observer measures the motion of an object. According to Newtonian mechanics, two observers making measurements of the speed of an object will measure different speeds if the observers are moving relative to each other. A person on the ground observing a ball that is on a train passing by will measure the speed of the ball as the same as the speed of the train. A person on the train observing the ball, however, will measure the ball's speed as zero. According to the traditional ideas about space and time, then, there could not be a constant, fundamental speed in the physical world because all speed is relative. However, near the end of the 19th century the Scottish physicist James Clerk Maxwell proposed a complete theory of electric and magnetic forces that contained just such a constant, which he called c. This constant speed was 300,000 km/sec (186,000 mi/sec) and was the speed of electromagnetic waves, including light waves. This feature of Maxwell's theory caused a crisis in physics because it indicated that speed was not always relative.
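The train example amounts to simple Galilean velocity subtraction: each observer measures the object's speed relative to their own frame. A minimal sketch, with a hypothetical train speed:

```python
# Galilean relativity: an observer moving at v_frame measures an object's
# speed as its ground speed minus the frame's own speed.
def observed_velocity(v_object: float, v_frame: float) -> float:
    """Velocity of the object as measured from a frame moving at v_frame."""
    return v_object - v_frame

train = 30.0   # m/s, hypothetical speed of the train (not a figure from the text)
ball = 30.0    # a ball at rest on the train moves with the train

print(observed_velocity(ball, 0.0))    # ground observer measures 30.0 m/s
print(observed_velocity(ball, train))  # observer on the train measures 0.0 m/s
```

Maxwell's constant c broke this picture, because a purely relative notion of speed leaves no room for one speed that every observer agrees on.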

Albert Einstein resolved this crisis in 1905 with his special theory of relativity. An important feature of Einstein's new theory was that no particle, and not even information, could travel faster than the fundamental speed c. In Newton's gravitation theory, however, information about gravitation moved at infinite speed. If a star exploded into two parts, for example, the change in gravitational pull would be felt immediately by a planet in a distant orbit around the exploded star. According to Einstein's theory, such forces were not possible.

Though Newton's theory contained several flaws, it is still very practical for use in everyday life. Even today, it is sufficiently accurate for dealing with earth-based gravitational effects such as in geology (the study of the formation of the earth and the processes acting on it), and for most scientific work in astronomy. Only when examining exotic phenomena such as black holes (points in space with a gravitational force so strong that not even light can escape them) or in explaining the big bang (the origin of the universe) is Newton's theory inaccurate or inapplicable.

The gravitational attraction of objects for one another is the easiest fundamental force to observe and was the first fundamental force to be described with a complete mathematical theory by the English physicist and mathematician Sir Isaac Newton. A more accurate theory called general relativity was formulated early in the 20th century by the German-born American physicist Albert Einstein. Scientists recognize that even this theory is not correct for describing how gravitation works in certain circumstances, and they continue to search for an improved theory.

Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.

The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size.

If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, a 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
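The uniform acceleration described above can be sketched directly; note that the object's mass never enters the calculation:

```python
# Free fall under uniform gravity: each second adds about 9.8 m/s of
# downward speed, regardless of the falling object's mass (air friction ignored).
g = 9.8  # m/s^2, acceleration due to gravity near the earth's surface

def speed_after(seconds: float, initial_speed: float = 0.0) -> float:
    """Downward speed (m/s) after falling for the given time, ignoring air friction."""
    return initial_speed + g * seconds

for t in range(1, 4):
    print(t, speed_after(t))  # speed grows by 9.8 m/s each second; mass plays no role
```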

The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
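The altitude example follows from the inverse-square law: at 4000 miles above the surface an object is two earth radii from the centre, so its weight drops to one quarter. A small sketch, taking the text's round 4000-mile figure as the earth's radius:

```python
# Weight falls off as the inverse square of the distance from the earth's centre.
EARTH_RADIUS_MI = 4000.0  # approximate radius implied by the example in the text

def weight_at_altitude(surface_weight: float, altitude_mi: float) -> float:
    """Weight at a given altitude, from the inverse-square law."""
    r = EARTH_RADIUS_MI + altitude_mi
    return surface_weight * (EARTH_RADIUS_MI / r) ** 2

# At 4000 mi altitude the 16 lb bowling ball is 2 radii out: (1/2)**2 = 1/4 the weight.
print(weight_at_altitude(16.0, 4000.0))  # 4.0 lb
```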

Einstein's general relativity theory predicts special gravitational conditions. The Big Bang theory, which describes the origin and early expansion of the universe, is one conclusion based on Einstein's theory that has been verified in several independent ways.

Another conclusion suggested by general relativity, as well as other relativistic theories of gravitation, is that gravitational effects move in waves. Astronomers have observed a loss of energy in a pair of neutron stars (stars composed of densely packed neutrons) that are orbiting each other. The astronomers theorize that energy-carrying gravitational waves are radiating from the pair, depleting the stars of their energy. Very violent astrophysical events, such as the explosion of stars or the collision of neutron stars, can produce gravitational waves strong enough that they may eventually be directly detectable with extremely precise instruments. Astrophysicists are designing such instruments with the hope that they will be able to detect gravitational waves by the beginning of the 21st century.

Another gravitational effect predicted by general relativity is the existence of black holes. The idea of a star with a gravitational force so strong that light cannot escape from its surface can be traced to Newtonian theory. Einstein modified this idea in his general theory of relativity. Because light cannot escape from a black hole, for any object (a particle, spacecraft, or wave) to escape, it would have to move past light. But light moves outward at the speed c, and according to relativity c is the highest attainable speed, so nothing can pass it. The black holes that Einstein envisioned, then, allow no escape whatsoever. An extension of this argument shows that when gravitation is this strong, nothing can even stay in the same place, but must move inward. Even the surface of a star must move inward, and must continue the collapse that created the strong gravitational force. What remains then is not a star, but a region of space from which emerges a tremendous gravitational force.
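The Newtonian root of the black-hole idea can be made concrete: setting the Newtonian escape velocity sqrt(2GM/r) equal to c and solving for r gives the radius within which not even light escapes. This sketch and the solar-mass figure are illustrative additions, not from the text (the result happens to coincide numerically with the relativistic Schwarzschild radius, though the reasoning differs):

```python
# Radius at which the Newtonian escape velocity sqrt(2*G*M/r) reaches c.
G = 6.670e-11   # universal constant of gravitation, N*m^2/kg^2
C = 3.0e8       # m/s, the fundamental speed c

def critical_radius(mass_kg: float) -> float:
    """Radius (m) inside which escape would require exceeding c."""
    return 2 * G * mass_kg / C ** 2

SUN = 1.99e30   # kg, approximate solar mass (assumed reference value)
print(critical_radius(SUN) / 1000)  # about 3 km: the Sun would have to shrink to this size
```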

Einstein's theory of gravitation revolutionized 20th-century physics. Another important revolution that took place was quantum theory. Quantum theory states that physical interactions, or the exchange of energy, cannot be made arbitrarily small. There is a minimal interaction that comes in a packet called the quantum of an interaction. For electromagnetism the quantum is called the photon. Like the other interactions, gravitation also must be quantized. Physicists call a quantum of gravitational energy a graviton. In principle, gravitational waves arriving at the earth would consist of gravitons. In practice, gravitational waves would consist of apparently continuous streams of gravitons, and individual gravitons could not be detected.

Einstein's theory did not include quantum effects. For most of the 20th century, theoretical physicists have been unsuccessful in their attempts to formulate a theory that resembles Einstein's theory but also includes gravitons. Despite the lack of a complete quantum theory, making some partial predictions about quantized gravitation is possible. In the 1970s, British physicist Stephen Hawking showed that quantum mechanical processes in the strong gravitational pull just outside of black holes would create particles and quanta that move away from the black hole, thereby robbing it of energy.

Astronomy is the study of the universe and the celestial bodies, gas, and dust within it. Astronomy includes observations and theories about the solar system, the stars, the galaxies, and the general structure of space. Astronomy also includes cosmology, the study of the universe and its past and future. People who study astronomy are called astronomers, and they use a wide variety of methods in their research. These methods usually involve ideas of physics, so most astronomers are also astrophysicists, and the terms astronomer and astrophysicist are essentially interchangeable. Some areas of astronomy also use techniques of chemistry, geology, and biology.

Astronomy is the oldest science, dating back thousands of years to when primitive people noticed objects in the sky overhead and watched the way the objects moved. In ancient Egypt, the first appearance of certain stars each year marked the onset of the seasonal flood, an important event for agriculture. In 17th-century England, astronomy provided methods of keeping track of time that were especially useful for accurate navigation. Astronomy has a long tradition of practical results, such as our current understanding of the stars, day and night, the seasons, and the phases of the Moon. Much of today's research in astronomy does not address immediate practical problems. Instead, it involves basic research to satisfy our curiosity about the universe and the objects in it. One day such knowledge may be of practical use to humans.

Astronomers use tools such as telescopes, cameras, spectrographs, and computers to analyse the light that astronomical objects emit. Amateur astronomers observe the sky as a hobby, while professional astronomers are paid for their research and usually work for large institutions such as colleges, universities, observatories, and government research institutes. Amateur astronomers make valuable observations, but are often limited by lack of access to the powerful and expensive equipment of professional astronomers.

A wide range of astronomical objects is accessible to amateur astronomers. Many solar system objects, such as planets, moons, and comets, are bright enough to be visible through binoculars and small telescopes. Small telescopes are also sufficient to reveal some of the beautiful detail in nebulas, clouds of gas and dust in our galaxy. Many amateur astronomers observe and photograph these objects. The increasing availability of sophisticated electronic instruments and computers over the past few decades has made powerful equipment more affordable and allowed amateur astronomers to expand their observations to much fainter objects. Amateur astronomers sometimes share their observations by posting their photographs on the World Wide Web, a network of information based on connections between computers.

Amateurs often undertake projects that require numerous observations over days, weeks, months, or even years. By searching the sky over a long period of time, amateur astronomers may observe things in the sky that represent sudden change, such as new comets or novas (stars that brighten suddenly). This type of consistent observation is also useful for studying objects that change slowly over time, such as variable stars and double stars. Amateur astronomers observe meteor showers, sunspots, and groupings of planets and the Moon in the sky. They also participate in expeditions to places in which special astronomical events, such as solar eclipses and meteor showers, are most visible. Several organizations, such as the Astronomical League and the American Association of Variable Star Observers, provide meetings and publications through which amateur astronomers can communicate and share their observations.

Professional astronomers usually have access to powerful telescopes, detectors, and computers. Most work in astronomy includes three parts, or phases. Astronomers first observe astronomical objects by guiding telescopes and instruments to collect the appropriate information. Astronomers then analyse the images and data. After the analysis, they compare their results with existing theories to determine whether their observations match what the theories predict, or whether the theories can be improved. Some astronomers work solely on observation and analysis, and some work solely on developing new theories.

Astronomy is such a broad topic that astronomers specialize in one or more parts of the field. For example, the study of the solar system is a different area of specialization than the study of stars. Astronomers who study our galaxy, the Milky Way, often use techniques different from those used by astronomers who study distant galaxies. Many planetary astronomers, such as scientists who study Mars, may have geology backgrounds and not consider themselves astronomers at all. Solar astronomers use different telescopes than nighttime astronomers use, because the Sun is so bright. Theoretical astronomers may never use telescopes at all. Instead, these astronomers use existing data or sometimes only previous theoretical results to develop and test theories. A growing field of astronomy is computational astronomy, in which astronomers use computers to simulate astronomical events. Examples of events for which simulations are useful include the formation of the earliest galaxies of the universe or the explosion of a star to make a supernova.

Astronomers learn about astronomical objects by observing the energy they emit. These objects emit energy in the form of electromagnetic radiation. This radiation travels throughout the universe in the form of waves and can range from gamma rays, which have extremely short wavelengths, to visible light, to radio waves, which are very long. The entire range of these different wavelengths makes up the electromagnetic spectrum.
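Because all electromagnetic waves travel at the speed of light, wavelength and frequency are tied together by c = wavelength × frequency: the short-wavelength gamma rays have the highest frequencies and the long radio waves the lowest. A minimal sketch (the sample wavelengths are illustrative, not from the text):

```python
# Convert an electromagnetic wavelength to its frequency via c = wavelength * frequency.
C = 3.0e8  # speed of light, m/s

def frequency(wavelength_m: float) -> float:
    """Frequency (Hz) of an electromagnetic wave of the given wavelength (m)."""
    return C / wavelength_m

print(frequency(500e-9))  # green visible light (~500 nm): on the order of 6e14 Hz
print(frequency(1.0))     # a 1 m radio wave: 3e8 Hz
```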

Astronomers gather different wavelengths of electromagnetic radiation depending on the objects that are being studied. The techniques of astronomy are often very different for studying different wavelengths. Conventional telescopes work only for visible light and the parts of the spectrum near visible light, such as the shortest infrared wavelengths and the longest ultraviolet wavelengths. Earth's atmosphere complicates studies by absorbing many wavelengths of the electromagnetic spectrum. Gamma-ray astronomy, X-ray astronomy, infrared astronomy, ultraviolet astronomy, radio astronomy, visible-light astronomy, cosmic-ray astronomy, gravitational-wave astronomy, and neutrino astronomy all use different instruments and techniques.

Observational astronomers use telescopes or other instruments to observe the heavens. The astronomers who do the most observing, however, probably spend more time using computers than they do using telescopes. A few nights of observing with a telescope often provide enough data to keep astronomers busy for months analysing the data.

Until the 20th century, all observational astronomers studied the visible light that astronomical objects emit. Such astronomers are called optical astronomers, because they observe the same part of the electromagnetic spectrum that the human eye sees. Optical astronomers use telescopes and imaging equipment to study light from objects. Professional astronomers today hardly ever look through telescopes. Instead, a telescope sends an object's light to a photographic plate or to an electronic light-sensitive computer chip called a charge-coupled device, or CCD. CCDs are about fifty times more sensitive than film, so today's astronomers can record in a minute an image that would have taken about an hour to record on film.

Telescopes may use either lenses or mirrors to gather visible light, permitting direct observation or photographic recording of distant objects. Those that use lenses are called refracting telescopes, since they use the property of refraction, or bending, of light. The largest refracting telescope is the 40-in (1-m) telescope at the Yerkes Observatory in Williams Bay, Wisconsin, founded in the late 19th century. Lenses bend different colours of light by different amounts, so different colours focus differently. Images produced by large lenses can be tinged with colour, often limiting the observations to those made through filters. Filters limit the image to one colour of light, so the lens bends all of the light in the image the same amount and makes the image more accurate than an image that includes all colours of light. Also, because light must pass through lenses, lenses can only be supported at the very edges. Large, heavy lenses are so thick that all the large telescopes in current use are made with other techniques.

Reflecting telescopes, which use mirrors, are easier to make than refracting telescopes and reflect all colours of light equally. All the largest telescopes today are reflecting telescopes. The largest single telescopes are the Keck telescopes at Mauna Kea Observatory in Hawaii. The Keck telescope mirrors are 394 in (10.0 m) in diameter. Mauna Kea Observatory, at an altitude of 4,205 m (13,796 ft), is especially high. The air at the observatory is clear, so many major telescope projects are located there.

The Hubble Space Telescope (HST), a reflecting telescope that orbits Earth, has returned the clearest images of any optical telescope. The main mirror of the HST is only 94 in (2.4 m) across, far smaller than that of the largest ground-based reflecting telescopes. Turbulence in the atmosphere makes observing objects as clearly as the HST can see impossible for ground-based telescopes. HST images of visible light are about five times finer than any produced by ground-based telescopes. Giant telescopes on Earth, however, collect much more light than the HST can. Examples of such giant telescopes include the twin 32-ft (10-m) Keck telescopes in Hawaii and the four 26-ft (8-m) telescopes in the Very Large Telescope array in the Atacama Desert in northern Chile (the nearest city is Antofagasta, Chile). Often astronomers use space and ground-based telescopes in conjunction.

Astronomers usually share telescopes. Many institutions with large telescopes accept applications from any astronomer who wishes to use the instruments, though others have limited sets of eligible applicants. The institution then divides the available time between successful applicants and assigns each astronomer an observing period. Astronomers can collect data from telescopes remotely. Data from Earth-based telescopes can be sent electronically over computer networks. Data from space-based telescopes reach Earth through radio waves collected by antennas on the ground.

Gamma rays have the shortest wavelengths. Special telescopes in orbit around Earth, such as the National Aeronautics and Space Administration’s (NASA’s) Compton Gamma-Ray Observatory, gather gamma rays before Earth’s atmosphere absorbs them. X rays, the next shortest wavelengths, also must be observed from space. NASA’s Chandra X-ray Observatory (CXO) is a school-bus-sized spacecraft scheduled to begin studying X rays from orbit in 1999. It is designed to make high-resolution images.

Ultraviolet light has wavelengths longer than X rays, but shorter than visible light. Ultraviolet telescopes are similar to visible-light telescopes in the way they gather light, but the atmosphere blocks most ultraviolet radiation. Most ultraviolet observations, therefore, must also take place in space. Most of the instruments on the Hubble Space Telescope (HST) are sensitive to ultraviolet radiation. Humans cannot see ultraviolet radiation, but astronomers can create visual images from ultraviolet light by assigning particular colours or shades to different intensities of radiation.

Infrared astronomers study parts of the infrared spectrum, which consists of electromagnetic waves with wavelengths ranging from just longer than visible light to 1,000 times longer than visible light. Earth’s atmosphere absorbs infrared radiation, so astronomers must collect infrared radiation from places where the atmosphere is very thin, or from above the atmosphere. Observatories for these wavelengths are located on certain high mountaintops or in space. Most infrared wavelengths can be observed only from space. Every warm object emits some infrared radiation. Infrared astronomy is useful because objects that are not hot enough to emit visible or ultraviolet radiation may still emit infrared radiation. Infrared radiation also passes through interstellar and intergalactic gas and dust more easily than radiation with shorter wavelengths. Further, the brightest part of the spectrum from the farthest galaxies in the universe is shifted into the infrared. The Next Generation Space Telescope, which NASA plans to launch in 2006, will operate especially in the infrared.

Radio waves have the longest wavelengths. Radio astronomers use giant dish antennas to collect and focus signals in the radio part of the spectrum. These celestial radio signals, often from hot bodies in space or from objects with strong magnetic fields, come through Earth's atmosphere to the ground. Radio waves penetrate dust clouds, allowing astronomers to see into the centre of our galaxy and into the cocoons of dust that surround forming stars.

Sometimes astronomers study emissions from space that are not electromagnetic radiation. Some of the particles of interest to astronomers are neutrinos, cosmic rays, and gravitational waves. Neutrinos are tiny particles with no electric charge and very little or no mass. The Sun and supernovas emit neutrinos. Most neutrino telescopes consist of huge underground tanks of liquid. These tanks capture a few of the many neutrinos that strike them, while the vast majority of neutrinos pass right through the tanks.

Cosmic rays are electrically charged particles that come to Earth from outer space at almost the speed of light. They are made up of negatively charged particles called electrons and positively charged nuclei of atoms. Astronomers do not know where most cosmic rays come from, but they use cosmic-ray detectors to study the particles. Cosmic-ray detectors are usually grids of wires that produce an electrical signal when a cosmic ray passes close to them.

Gravitational waves are a predicted consequence of the general theory of relativity developed by German-born American physicist Albert Einstein. Since the 1960s, astronomers have been building detectors for gravitational waves. Older gravitational-wave detectors were huge instruments that surrounded a carefully measured and positioned massive object suspended from the top of the instrument. Lasers trained on the object were designed to measure the object’s movement, which theoretically would occur when a gravitational wave hit the object. At the end of the 20th century, these instruments had picked up no gravitational waves. Gravitational waves should be very weak, and the instruments were probably not yet sensitive enough to register them. In the 1970s and 1980s American physicists Joseph Taylor and Russell Hulse observed indirect evidence of gravitational waves by studying systems of double pulsars. A new generation of gravitational-wave detectors, developed in the 1990s, used interferometers to measure distortions of space that would be caused by passing gravitational waves.

Some objects emit radiation more strongly in one wavelength than in another, but a set of data across the entire spectrum of electromagnetic radiation is much more useful than observations in any one wavelength. For example, the supernova remnant known as the Crab Nebula has been observed in every part of the spectrum, and astronomers have used all the discoveries together to make a complete picture of how the Crab Nebula is evolving.

Whether astronomers take data from a ground-based telescope or have data radioed to them from space, they must then analyse the data. Usually the data are handled with the aid of a computer, which can carry out various manipulations the astronomer requests. For example, some of the individual picture elements, or pixels, of a CCD may be more sensitive than others. Consequently, astronomers sometimes take images of blank sky to measure which pixels appear brighter. They can then take these variations into account when interpreting the actual celestial images. Astronomers may write their own computer programs to analyse data or, as is increasingly the case, use certain standard computer programs developed at national observatories or elsewhere.

Often an astronomer uses observations to test a specific theory. Sometimes, a new experimental capability allows astronomers to study a new part of the electromagnetic spectrum or to see objects in greater detail or through special filters. If the observations do not verify the predictions of a theory, the theory must be discarded or, if possible, modified.

Up to about 3,000 stars are visible at a time with the unaided eye on a clear night, far from city lights. A view at night may also show several planets and perhaps a comet or a meteor shower. Increasingly, human-made light pollution is making the sky less dark, limiting the number of visible astronomical objects. During the daytime the Sun shines brightly. The Moon and bright planets are sometimes visible early or late in the day but are rarely seen at midday.

Earth moves in two basic ways: It turns in place, and it revolves around the Sun. Earth turns around its axis, an imaginary line that runs down its centre through its North and South poles. The Moon also revolves around Earth. All of these motions produce day and night, the seasons, the phases of the Moon, and solar and lunar eclipses.

Earth is about 12,000 km (about 7,000 mi) in diameter. As it revolves, or moves in a circle, around the Sun, Earth spins on its axis. This spinning movement is called rotation. Earth’s axis is tilted 23.5° with respect to the plane of its orbit. Each time Earth rotates on its axis, it passes through one day, a cycle of light and dark. Humans artificially divide the day into 24 hours and then divide the hours into 60 minutes and the minutes into 60 seconds.

Earth revolves around the Sun once every year, or 365.25 days (most people use a 365-day calendar and take care of the extra 0.25 day by adding a day to the calendar every four years, creating a leap year). The orbit of Earth is almost, but not quite, a circle, so Earth is sometimes a little closer to the Sun than at other times. If Earth were upright as it revolved around the Sun, each point on Earth would have exactly twelve hours of light and twelve hours of dark each day. Because Earth is tilted, however, the northern hemisphere sometimes points toward the Sun and sometimes points away from the Sun. This tilt is responsible for the seasons. When the northern hemisphere points toward the Sun, the northernmost regions of Earth see the Sun 24 hours a day. The whole northern hemisphere gets more sunlight and gets it at a more direct angle than the southern hemisphere does during this period, which lasts for half of the year. The second half of this period, when the northern hemisphere points most directly at the Sun, is the northern hemisphere's summer, which corresponds to winter in the southern hemisphere. During the other half of the year, the southern hemisphere points more directly toward the Sun, so it is spring and summer in the southern hemisphere and fall and winter in the northern hemisphere.
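The calendar bookkeeping described above (a 365-day calendar plus one extra day every four years) can be sketched in a few lines of Python. The `is_leap_year` name is our own, and the comment notes where the modern Gregorian calendar refines the simple rule:

```python
def is_leap_year(year: int) -> bool:
    # Simple every-four-years rule from the text, absorbing the extra
    # 0.25 day per orbit. (The modern Gregorian calendar also skips most
    # century years, because a year is closer to 365.2422 days.)
    return year % 4 == 0

print(is_leap_year(2024))  # True
print(is_leap_year(2023))  # False
```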

One revolution of the Moon around Earth takes a little more than twenty-seven days seven hours. The Moon rotates on its axis in this same period of time, so the same face of the Moon is always presented to Earth. Over a period a little longer than twenty-nine days twelve hours, the Moon goes through a series of phases, in which the amount of the lighted half of the Moon we see from Earth changes. These phases are caused by the changing angle of sunlight hitting the Moon. (The period of phases is longer than the period of revolution of the Moon, because the motion of Earth around the Sun changes the angle at which the Sun’s light hits the Moon from night to night.)
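The two lunar periods are related by simple arithmetic: over one phase cycle the Moon must also make up the angle Earth has moved around the Sun. A quick check, using rounded standard values (27.32 days and 365.25 days are assumptions for illustration, not figures from the text):

```python
# 1/synodic = 1/sidereal - 1/year: the Moon's phase cycle is longer than
# its orbital period because Earth is also moving around the Sun.
sidereal_month = 27.32  # days, Moon's orbital period (assumed rounded value)
year = 365.25           # days, Earth's orbital period

synodic_month = 1 / (1 / sidereal_month - 1 / year)
print(round(synodic_month, 2))  # ~29.53 days, "a little longer than 29 days 12 hours"
```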

The Moon’s orbit around Earth is tilted 5° from the plane of Earth’s orbit. Because of this tilt, when the Moon is at the point in its orbit where it is between Earth and the Sun, the Moon is usually a little above or below the Sun. At that time, the Sun lights the side of the Moon facing away from Earth, and the side of the Moon facing toward Earth is dark. This point in the Moon’s orbit corresponds to a phase of the Moon called the new moon. A quarter moon occurs when the Moon is at right angles to the line formed by the Sun and Earth. The Sun lights the side of the Moon closest to it, and half of that side is visible from Earth, forming a bright half-circle. When the Moon is on the opposite side of Earth from the Sun, the face of the Moon visible from Earth is lit, showing the full moon in the sky.

Because of the tilt of the Moon's orbit, the Moon usually passes above or below the Sun at new moon and above or below Earth's shadow at full moon. Sometimes, though, the full moon or new moon crosses the plane of Earth's orbit. By a coincidence of nature, even though the Moon is about 400 times smaller than the Sun, it is also about 400 times closer to Earth than the Sun is, so the Moon and Sun look almost the same size from Earth. If the Moon lines up with the Sun and Earth at new moon (when the Moon is between Earth and the Sun), it blocks the Sun’s light from Earth, creating a solar eclipse. If the Moon lines up with Earth and the Sun at the full moon (when Earth is between the Moon and the Sun), Earth’s shadow covers the Moon, making a lunar eclipse.
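The size-and-distance coincidence can be verified with the small-angle approximation; the rounded diameters and distances below are standard reference values we have assumed for illustration:

```python
import math

# Rounded values in km (assumed for illustration, not from the text).
MOON_DIAMETER, MOON_DISTANCE = 3_475, 384_400
SUN_DIAMETER, SUN_DISTANCE = 1_392_000, 149_600_000

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    # Small-angle approximation: angle in radians ~ diameter / distance.
    return math.degrees(diameter_km / distance_km)

print(round(angular_size_deg(MOON_DIAMETER, MOON_DISTANCE), 2))  # 0.52
print(round(angular_size_deg(SUN_DIAMETER, SUN_DISTANCE), 2))    # 0.53
```

Both work out to about half a degree, which is why the Moon can almost exactly cover the Sun during a solar eclipse.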

A total solar eclipse is visible from only a small region of Earth. During a solar eclipse, the complete shadow of the Moon that falls on Earth is only about 160 km (about 100 mi) wide. As Earth, the Sun, and the Moon move, however, the Moon’s shadow sweeps out a path up to 16,000 km (10,000 mi) long. The total eclipse can only be seen from within this path. A total solar eclipse occurs about every eighteen months. Off to the sides of the path of a total eclipse, a partial eclipse, in which the Sun is only partly covered, is visible. Partial eclipses are much less dramatic than total eclipses. The Moon’s orbit around Earth is elliptical, or egg-shaped. The distance between Earth and the Moon varies slightly as the Moon orbits Earth. When the Moon is farther from Earth than usual, it appears smaller and may not cover the entire Sun during an eclipse. A ring, or annulus, of sunlight remains visible, making an annular eclipse. An annular solar eclipse also occurs about every eighteen months. Additional partial solar eclipses are also visible from Earth in between.

At a lunar eclipse, the Moon passes into Earth's shadow. When the Moon is completely in the shadow, the total lunar eclipse is visible from everywhere on the half of Earth from which the Moon is visible at that time. As a result, more people see total lunar eclipses than see total solar eclipses.

In an open place on a clear dark night, streaks of light may appear in a random part of the sky about once every ten minutes. These streaks are meteors, bits of rock burning up in Earth's atmosphere. The bits of rock are called meteoroids, and when these bits survive Earth’s atmosphere intact and land on Earth, they are known as meteorites.

Every month or so, Earth passes through the orbit of a comet. Dust from the comet remains in the comet's orbit. When Earth passes through the band of dust, the dust and bits of rock burn up in the atmosphere, creating a meteor shower. Many more meteors are visible during a meteor shower than on an ordinary night. The most observed meteor shower is the Perseid shower, which occurs each year on August 11th or 12th.

Humans have picked out landmarks in the sky and mapped the heavens for thousands of years. For centuries sailors have navigated by the fixed stars, and maps of the sky helped keep their craft from getting lost. Now astronomers methodically map the sky to produce a universal format for the addresses of stars, galaxies, and other objects of interest.

Some of the stars in the sky are brighter and more noticeable than others are, and some of these bright stars appear to the eye to be grouped together. Ancient civilizations imagined that groups of stars represented figures in the sky. The oldest known representations of these groups of stars, called constellations, are from ancient Sumer (now Iraq) from about 4000 BC. The constellations recorded by ancient Greeks and Chinese resemble the Sumerian constellations. The northern hemisphere constellations that astronomers recognize today are based on the Greek constellations. Explorers and astronomers developed and recorded the official constellations of the southern hemisphere in the 16th and 17th centuries. The International Astronomical Union (IAU) officially recognizes eighty-eight constellations. The IAU defined the boundaries of each constellation, so the eighty-eight constellations divide the sky without overlapping.

A familiar group of stars in the northern hemisphere is called the Big Dipper. The Big Dipper is part of an official constellation, Ursa Major, or the Great Bear. Groups of stars that are not official constellations, such as the Big Dipper, are called asterisms. While the stars in the Big Dipper appear in approximately the same part of the sky, they vary greatly in their distance from Earth. This is true for the stars in all constellations or asterisms: the stars of the group do not really lie close to each other in space; they merely appear together as seen from Earth. The patterns of the constellations are figments of humans’ imagination, and different artists may connect the stars of a constellation in different ways, even when illustrating the same myth.

Astronomers use coordinate systems to label the positions of objects in the sky, just as geographers use longitude and latitude to label the positions of objects on Earth. Astronomers use several different coordinate systems. The two most widely used are the altazimuth system and the equatorial system. The altazimuth system gives an object’s coordinates with respect to the sky visible above the observer. The equatorial coordinate system designates an object’s location with respect to Earth’s entire night sky, or the celestial sphere.

One of the ways astronomers give the position of a celestial object is by specifying its altitude and its azimuth. This coordinate system is called the altazimuth system. The altitude of an object is equal to its angle, in degrees, above the horizon. An object at the horizon would have an altitude of 0°, and an object directly overhead would have an altitude of 90°. The azimuth of an object is equal to its angle in the horizontal direction, with north at 0°, east at 90°, south at 180°, and west at 270°. For example, if an astronomer were looking for an object at 23° altitude and 87° azimuth, the astronomer would know to look low in the sky and almost directly east.
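As a toy illustration of reading altazimuth coordinates, the hypothetical helper below turns an altitude and azimuth into the kind of plain-language direction used in the example above (the function name, compass buckets, and altitude thresholds are our own):

```python
def describe_direction(altitude_deg: float, azimuth_deg: float) -> str:
    # Azimuth convention from the text: 0 = north, 90 = east,
    # 180 = south, 270 = west.
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    compass = names[round(azimuth_deg / 45) % 8]
    if altitude_deg < 30:
        height = "low"
    elif altitude_deg > 60:
        height = "high"
    else:
        height = "midway up"
    return f"{height} in the sky, toward the {compass}"

print(describe_direction(23, 87))  # low in the sky, toward the east
```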

As Earth rotates, astronomical objects appear to rise and set, so their altitudes and azimuths are constantly changing. An object’s altitude and azimuth also vary according to an observer’s location on Earth. Therefore, astronomers almost never use altazimuth coordinates to record an object’s position. Instead, astronomers with altazimuth telescopes translate coordinates from equatorial coordinates to find an object. Telescopes that use an altazimuth mounting system may be simple to set up, but they require many calculated movements to keep them pointed at an object as it moves across the sky. These telescopes fell out of use with the development of the equatorial coordinate and mounting system in the early 1800s. However, computers have made altazimuth systems popular again. Altazimuth mounting systems are simple and inexpensive, and, with computers to do the required calculations and control the motor that moves the telescope, they are practical.

The equatorial coordinate system is a coordinate system fixed on the sky. In this system, a star keeps the same coordinates no matter what the time is or where the observer is located. The equatorial coordinate system is based on the celestial sphere. The celestial sphere is a giant imaginary globe surrounding Earth. This sphere has north and south celestial poles directly above Earth’s North and South poles. It has a celestial equator, directly above Earth’s equator. Another important part of the celestial sphere is the line that marks the movement of the Sun with respect to the stars throughout the year. This path is called the ecliptic. Because Earth is tilted with respect to its orbit around the Sun, the ecliptic is not the same as the celestial equator. The ecliptic is tilted 23.5° to the celestial equator and crosses the celestial equator at two points on opposite sides of the celestial sphere. The crossing points are called the vernal (or spring) equinox and the autumnal equinox. The vernal equinox and autumnal equinox mark the beginning of spring and fall, respectively. The points at which the ecliptic and celestial equator are farthest apart are called the summer solstice and the winter solstice, which mark the beginning of summer and winter, respectively.

As Earth rotates on its axis each day, the stars and other distant astronomical objects appear to rise in the eastern part of the sky and set in the west. They seem to travel in circles around Earth’s North or South poles. In the equatorial coordinate system, the celestial sphere turns with the stars (but this movement is really caused by the rotation of Earth). The celestial sphere makes one complete rotation every twenty-three hours fifty-six minutes, which is four minutes shorter than a day measured by the movement of the Sun. A complete rotation of the celestial sphere is called a sidereal day. Because the sidereal day is shorter than a solar day, the stars that an observer sees from any location on Earth change slightly from night to night. The difference between a sidereal day and a solar day occurs because of Earth’s motion around the Sun.
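The four-minute difference follows from the fact that, relative to the stars, Earth completes one extra rotation per year (about 366.25 turns in 365.25 solar days). A quick check:

```python
# A sidereal day is shorter than a solar day by the ratio of solar days
# per year to rotations per year.
solar_day_min = 24 * 60                            # 1440 minutes
sidereal_day_min = solar_day_min * 365.25 / 366.25

print(round(sidereal_day_min))                     # 1436 (23 h 56 min)
print(round(solar_day_min - sidereal_day_min, 1))  # 3.9 minutes shorter
```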

The equivalent of longitude on the celestial sphere is called right ascension, and the equivalent of latitude is declination. Specifying the right ascension of a star is equivalent to measuring, for a place on Earth, the east-west distance from a line called the prime meridian that runs through Greenwich, England. Right ascension starts at the vernal equinox. Longitude on Earth is given in degrees, but right ascension is given in units of time: hours, minutes, and seconds. This is because the celestial equator is divided into 24 equal parts, each called an hour of right ascension rather than 15 degrees. Each hour is made up of 60 minutes, and each minute is divided into 60 seconds. Measuring right ascension in units of time makes it easier for astronomers to determine the best time for observing an object. A particular line of right ascension will be at its highest point in the sky above a particular place on Earth four minutes earlier each day, so keeping track of the movement of the celestial sphere with an ordinary clock would be complicated. Astronomers have special clocks that keep sidereal time (24 sidereal hours are equal to twenty-three hours fifty-six minutes of familiar solar time). Astronomers compare the current sidereal time with the right ascension of the object they wish to view. The object will be highest in the sky when the sidereal time equals the right ascension of the object.

The direction perpendicular to right ascension, and the equivalent of latitude on Earth, is declination. Declination is measured in degrees. These degrees are divided into arcminutes and arcseconds. One arcminute is equal to 1/60 of a degree, and one arcsecond is equal to 1/60 of an arcminute, or 1/3600 of a degree. The celestial equator is at declination 0°, the north celestial pole is at declination +90°, and the south celestial pole has a declination of -90°. Each star has a right ascension and a declination that mark its position in the sky. The brightest star, Sirius, for example, has right ascension 6 hours 45 minutes (abbreviated as 6h 45m) and declination -16 degrees 43 arcminutes.
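Since an hour of right ascension corresponds to 15°, and arcminutes and arcseconds are 1/60 and 1/3600 of a degree, converting catalogue-style coordinates to decimal degrees takes only a few lines. The function names below are our own; the Sirius figures are the ones quoted above:

```python
def ra_to_degrees(hours: int, minutes: int, seconds: float = 0) -> float:
    # 24 hours of right ascension span 360 degrees, so 1 hour = 15 degrees.
    return (hours + minutes / 60 + seconds / 3600) * 15

def dec_to_degrees(degrees: int, arcmin: int, arcsec: float = 0) -> float:
    # Arcminutes are 1/60 of a degree; arcseconds are 1/3600 of a degree.
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + arcmin / 60 + arcsec / 3600)

# Sirius: right ascension 6h 45m, declination -16 degrees 43 arcminutes.
print(round(ra_to_degrees(6, 45), 2))     # 101.25
print(round(dec_to_degrees(-16, 43), 2))  # -16.72
```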

Stars are so far away from Earth that the main star motion we see results from Earth’s rotation. Stars do move in space, however, and these proper motions slightly change the coordinates of the nearest stars over time. The effects of the Sun and the Moon on Earth also cause slight changes in Earth’s axis of rotation. These changes, called precession, cause a slow drift in right ascension and declination. To account for precession, astronomers redefine the celestial coordinates every fifty years or so.

Solar systems, both our own and those located around other stars, are a major area of research for astronomers. A solar system consists of a central star orbited by planets or smaller rocky bodies. The gravitational force of the star holds the system together. In our solar system, the central star is the Sun. It holds all the planets, including Earth, in their orbits and provides light and energy necessary for life. Our solar system is just one of many. Astronomers are just beginning to be able to study other solar systems.

Our solar system contains the Sun, nine planets (of which Earth is third from the Sun), and the planets’ satellites. It also contains asteroids, comets, and interplanetary dust and gases.

Until the end of the 18th century, humans knew of five planets in addition to Earth: Mercury, Venus, Mars, Jupiter, and Saturn. When viewed without a telescope, planets appear to be dots of light in the sky. They shine steadily, while stars seem to twinkle. Twinkling results from turbulence in Earth's atmosphere. Stars are so far away that they appear as tiny points of light. A moment of turbulence can change that light for a fraction of a second. Even though they look the same size as stars to unaided human eyes, planets are close enough that they take up more space in the sky than stars do. The disks of planets are big enough to average out variations in light caused by turbulence and therefore do not twinkle.

Between 1781 and 1930, astronomers found three more planets: Uranus, Neptune, and Pluto. This brought the total number of planets in our solar system to nine. In order of increasing distance from the Sun, the planets in our solar system are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto.

Astronomers call the inner planets (Mercury, Venus, Earth, and Mars) the terrestrial planets. Terrestrial (from the Latin word terra, meaning ‘Earth’) planets are Earthlike in that they have solid, rocky surfaces. The next group of planets (Jupiter, Saturn, Uranus, and Neptune) is called the Jovian planets, or the giant planets. The word Jovian has the same Latin root as the word Jupiter. Astronomers call these planets the Jovian planets because they resemble Jupiter in that they are giant, massive planets made almost entirely of gas. The mass of Jupiter, for example, is 318 times the mass of Earth. The Jovian planets have no solid surfaces, although they probably have rocky cores several times more massive than Earth. Rings of chunks of ice and rock surround each of the Jovian planets. The rings around Saturn are the most familiar.

Pluto, the outermost planet, is tiny, with a mass about one five-hundredth the mass of Earth. Pluto seems out of place, with its tiny, solid body out beyond the giant planets. Many astronomers believe that Pluto is really just the largest, or one of the largest, of a group of icy objects in the outer solar system. These objects orbit in a part of the solar system called the Kuiper Belt. Even if astronomers decide that Pluto belongs to the Kuiper Belt objects, it will probably still be called a planet for historical reasons.

Most of the planets have moons, or satellites. Earth's Moon has a diameter about one-fourth the diameter of Earth. Mars has two tiny chunks of rock, Phobos and Deimos, each only about 10 km (about 6 mi) across. Jupiter has at least seventeen satellites. The largest four, known as the Galilean satellites, are Io, Europa, Ganymede, and Callisto. Ganymede is even larger than the planet Mercury. Saturn has at least eighteen satellites. Saturn’s largest moon, Titan, is also larger than the planet Mercury and is enshrouded by a thick, opaque, smoggy atmosphere. Uranus has at least seventeen moons, and Neptune has at least eight moons. Pluto has one moon, called Charon. Charon is more than half as big as Pluto.

Comets and asteroids are rocky and icy bodies that are smaller than planets. The distinction between comets, asteroids, and other small bodies in the solar system is a little fuzzy, but generally a comet is icier than an asteroid and has a more elongated orbit. The orbit of a comet takes it close to the Sun, then back into the outer solar system. When comets near the Sun, some of their ice turns from solid material into gas, releasing some of their dust. Comets have long tails of glowing gas and dust when they are near the Sun. Asteroids are rockier bodies and usually have orbits that keep them always at about the same distance from the Sun.

Both comets and asteroids have their origins in the early solar system. While the solar system was forming, many small, rocky objects called planetesimals condensed from the gas and dust of the early solar system. Millions of planetesimals remain in orbit around the Sun. A large spherical cloud of such objects out beyond Pluto forms the Oort cloud. The objects in the Oort cloud are considered comets. When our solar system passes close to another star or drifts closer than usual to the centre of our galaxy, the change in gravitational pull may disturb the orbit of one of the icy comets in the Oort cloud. As this comet falls toward the Sun, the ice turns into vapour, freeing dust from the object. The gas and dust form the tail or tails of the comet. The gravitational pull of large planets such as Jupiter or Saturn may swerve the comet into an orbit closer to the Sun. The time needed for a comet to make a complete orbit around the Sun is called the comet’s period. Astronomers believe that comets with periods longer than about 200 years come from the Oort cloud. Short-period comets, those with periods less than about 200 years, probably come from the Kuiper Belt, a ring of planetesimals beyond Neptune. The material in comets is probably from the very early solar system, so astronomers study comets to find out more about our solar system’s formation.

When the solar system was forming, some of the planetesimals came together more toward the centre of the solar system. Gravitational forces from the giant planet Jupiter prevented these planetesimals from forming full-fledged planets. Instead, the planetesimals broke up to create thousands of minor planets, or asteroids, that orbit the Sun. Most of them are in the asteroid belt, between the orbits of Mars and Jupiter, but thousands are in orbits that come closer to Earth or even cross Earth's orbit. Scientists are increasingly aware of potential catastrophes if any of the largest of these asteroids hits Earth. Perhaps 2,000 asteroids larger than 1 km (0.6 mi) in diameter are potential hazards.

The Sun is the nearest star to Earth and is the centre of the solar system. It is only eight light-minutes away from Earth, meaning light takes only eight minutes to travel from the Sun to Earth. The next nearest star is four light-years away, so light from this star, Proxima Centauri (part of the triple star Alpha Centauri), takes four years to reach Earth. The Sun's closeness means that the light and other energy we get from the Sun dominate Earth’s environment and life. The Sun also provides a way for astronomers to study stars. They can see details and layers of the Sun that are impossible to see on more distant stars. In addition, the Sun provides a laboratory for studying hot gases held in place by magnetic fields. Scientists would like to create similar conditions (hot gases contained by magnetic fields) on Earth. Creating such environments could be useful for studying basic physics.
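The light-minute and light-year figures above convert to kilometres by multiplying the speed of light by the travel time. A minimal sketch using the rounded speed of light (the variable names are our own):

```python
C_KM_PER_S = 299_792  # speed of light in km/s (rounded)

def light_travel_km(seconds: float) -> float:
    # Distance covered by light in the given travel time.
    return C_KM_PER_S * seconds

sun_km = light_travel_km(8 * 60)                   # eight light-minutes
proxima_km = light_travel_km(4 * 365.25 * 86_400)  # about four light-years

print(f"{sun_km:.3g} km")      # ~1.44e+08 km
print(f"{proxima_km:.3g} km")  # ~3.78e+13 km
```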

The Sun produces its energy by fusing hydrogen into helium in a process called nuclear fusion. In nuclear fusion, two atoms merge to form a heavier atom and release energy. The Sun and stars of similar mass start off with enough hydrogen to shine for about ten billion years. The Sun is less than halfway through its lifetime.

Although most telescopes are used mainly to collect the light of faint objects so that they can be studied, telescopes for planetary and other solar system studies are also used to magnify images. Astronomers use some of the observing time of several important telescopes for planetary studies. Overall, planetary astronomers must apply and compete for observing time on telescopes with astronomers seeking to study other objects. Some planetary objects can be studied as they pass in front of, or occult, distant stars. The atmosphere of Neptune's moon Triton and the shapes of asteroids can be investigated in this way, for example. The fields of radio and infrared astronomy are useful for measuring the temperatures of planets and satellites. Ultraviolet astronomy can help astronomers study the magnetic fields of planets.

During the space age, scientists have developed telescopes and other devices, such as instruments to measure magnetic fields or space dust, that can leave Earth's surface and travel close to other objects in the solar system. Robotic spacecraft have visited all of the planets in the solar system except Pluto. Some missions have targeted specific planets and spent much time studying a single planet, and some spacecraft have flown past a number of planets.

Astronomers use different telescopes to study the Sun than they use for nighttime studies because of the extreme brightness of the Sun. Telescopes in space, such as the Solar and Heliospheric Observatory (SOHO) and the Transition Region and Coronal Explorer (TRACE), are able to study the Sun in regions of the spectrum other than visible light. X-rays, ultraviolet, and radio waves from the Sun are especially interesting to astronomers. Studies in various parts of the spectrum give insight into giant flows of gas in the Sun, into how the Sun's energy leaves the Sun to travel to Earth, and into what the interior of the Sun is like. Astronomers also study solar-terrestrial relations, the relation between activity on the Sun and magnetic storms and other effects on Earth. Some of these storms and effects can affect radio reception, cause electrical blackouts, or damage satellites in orbit.

Our solar system began forming about five billion years ago, when a cloud of gas and dust between the stars in our Milky Way Galaxy began contracting. A nearby supernova (an exploding star) may have started the contraction, but most astronomers believe a random change in density in the cloud caused the contraction. Once the cloud, known as the solar nebula, began to contract, the contraction occurred faster and faster. The gravitational energy caused by this contraction heated the solar nebula. As the cloud became smaller, it began to spin faster, much as a spinning skater will spin faster by pulling in his or her arms. This spin kept the nebula from forming a sphere; instead, it settled into a disk of gas and dust.

In this disk, small regions of gas and dust began to draw closer and stick together. The objects that resulted, which were usually less than 500 km (300 mi) across, are the planetesimals. Eventually, some planetesimals stuck together and grew to form the planets. Scientists have made computer models of how they believe the early solar system behaved. The models show that it is usual for a solar system to produce one or two huge planets like Jupiter and several other, much smaller planets.

The largest region of gas and dust wound up in the centre of the nebula and formed the protosun (proto is Greek for ‘before’ and is used to distinguish between an object and its forerunner). The increasing temperature and pressure in the middle of the protosun vaporized the dust and eventually allowed nuclear fusion to begin, marking the formation of the Sun. The young Sun gave off a strong solar wind that drove off most of the lighter elements, such as hydrogen and helium, from the inner planets. The inner planets then solidified and formed rocky surfaces. The solar wind lost strength. Jupiter’s gravitational pull was strong enough to keep its shroud of hydrogen and helium gas. Saturn, Uranus, and Neptune also kept their layers of light gases.

The theory of solar system formation described above accounts for the appearance of the solar system as we know it. Examples of this appearance include the fact that the planets all orbit the Sun in the same direction and that almost all the planets rotate on their axes in the same direction. The recent discoveries of distant solar systems with different properties could lead to modifications in the theory, however.

Studies in the visible, the infrared, and the shortest radio wavelengths have revealed disks around several young stars in our galaxy. One such object, Beta Pictoris (about sixty-two light-years from Earth), has revealed a warp in the disk that could be a sign of planets in orbit. Astronomers are hopeful that, in the cases of these young stars, they are studying the early stages of solar system formation.

Although astronomers have long assumed that many other stars have planets, they have been unable to detect these other solar systems until recently. Planets orbiting around stars other than the Sun are called extrasolar planets. Planets are small and dim compared with stars, so they are lost in the glare of their parent stars and are invisible to direct observation with telescopes.

Astronomers have tried to detect other solar systems by searching for the way a planet affects the movement of its parent star. The gravitational attraction between a planet and its star pulls the star slightly toward the planet, so the star wobbles slightly as the planet orbits it. Throughout the mid- and late 1900s, several observatories tried to detect wobbles in the nearest stars by watching the stars’ movement across the sky. Wobbles were reported in several stars, but later observations showed that the results were false.

In the early 1990s, studies of a pulsar revealed at least two planets orbiting it. Pulsars are compact stars that give off pulses of radio waves at very regular intervals. The pulsar, designated PSR 1257+12, is about 1,000 light~years from Earth. This pulsar's pulses sometimes came a little early and sometimes a little late in a periodic pattern, revealing that an unseen object was pulling the pulsar toward and away from Earth. The environment of a pulsar, which emits X rays and other strong radiation that would be harmful to life on Earth, is so extreme that these objects would have little resemblance to planets in our solar system.

The wobbling of a star changes the star’s light that reaches Earth. When the star moves away from Earth, even slightly, each wave of light must travel farther to Earth than the wave before it. This increases the distance between waves (called the wavelength) as the waves reach Earth. When a star’s planet pulls the star closer to Earth, each successive wavefront has less distance to travel to reach Earth. This shortens the wavelength of the light that reaches Earth. This effect is called the Doppler effect. No star moves fast enough for the change in wavelength to result in a noticeable change in colour, which depends on wavelength, but the changes in wavelength can be measured with precise instruments. Because the planet’s effect on the star is very small, astronomers must analyse the starlight carefully to detect a shift in wavelength. They do this by first using a technique called spectroscopy to separate the white starlight into its component colours, as water droplets do to sunlight in a rainbow. Stars emit light in a continuous range. The range of wavelengths a star emits is called the star’s spectrum. This spectrum has dark lines, called absorption lines, at wavelengths at which atoms in the outermost layers of the star absorb light.

Astronomers know what the exact wavelength of each absorption line is for a star that is not moving. By seeing how far the movement of a star shifts the absorption lines in its spectrum, astronomers can calculate how fast the star is moving. If the motion fits the model of the effect of a planet, astronomers can calculate the mass of the planet and how close it is to the star. These calculations can provide only a lower limit on the planet’s mass, because astronomers cannot tell at what angle the planet orbits the star. Astronomers need to know that angle to calculate the planet’s mass accurately. Because of this uncertainty, some of the giant extrasolar planets may be a type of failed star called a brown dwarf rather than true planets. Most astronomers believe that many of the suspected planets are true planets.
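The shift-to-speed step uses the non-relativistic Doppler formula, v = c * (shift in wavelength) / (rest wavelength). A sketch with hypothetical numbers (the 656.3 nm hydrogen line and the size of the shift are illustrative choices, not values from the text):

```python
C = 2.998e8  # speed of light, m/s

def radial_velocity(rest_nm, observed_nm):
    """Non-relativistic Doppler: v = c * (observed - rest) / rest.
    A positive result means the star is receding from Earth."""
    return C * (observed_nm - rest_nm) / rest_nm

# Hypothetical: an absorption line at 656.3 nm observed shifted
# by one ten-millionth of its rest wavelength.
v = radial_velocity(656.3, 656.3 * (1 + 1e-7))
print(round(v, 1))  # about 30.0 m/s
```

Shifts of tens of metres per second are the scale of wobble a giant planet can induce, which is why the starlight must be analysed so carefully.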

Between 1995 and 1999 astronomers discovered more than a dozen extrasolar planets. Astronomers now know of far more planets outside our solar system than inside our solar system. Most of these planets, surprisingly, are more massive than Jupiter and are orbiting so close to their parent stars that some of them have ‘years’ (the time it takes to orbit the parent star once) as short as only a few days on Earth. These solar systems are so different from our solar system that astronomers are still trying to reconcile them with the current theory of solar system formation. Some astronomers suggest that the giant extrasolar planets formed much farther away from their stars and were later thrown into the inner solar systems by some gravitational interaction.
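An orbital period of a few days implies an orbit far smaller than Mercury's. Kepler's third law gives a quick estimate; the sketch below assumes a star of one solar mass, for which the law reduces to a^3 = P^2 with P in years and a in astronomical units:

```python
def orbit_radius_au(period_days, star_mass_suns=1.0):
    """Kepler's third law: a^3 = M * P^2, with P in years, a in AU,
    and M in solar masses (planet mass assumed negligible)."""
    p_years = period_days / 365.25
    return (star_mass_suns * p_years ** 2) ** (1.0 / 3.0)

a = orbit_radius_au(4.0)  # a hypothetical 4-day 'year'
print(round(a, 3))        # about 0.049 AU; Mercury orbits at about 0.39 AU
```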

Stars are an important topic of astronomical research. Stars are balls of gas that shine or used to shine because of nuclear fusion in their cores. The most familiar star is the Sun. The nuclear fusion in stars produces a force that pushes the material in a star outward. However, the gravitational attraction of the star’s material for itself pulls the material inward. A star can remain stable as long as the outward pressure and gravitational force balance. The properties of a star depend on its mass, its temperature, and its stage in evolution.

Astronomers study stars by measuring their brightness or, with more difficulty, their distances from Earth. They measure the ‘colour’ of a star, the differences in the star’s brightness from one part of the spectrum to another, to determine its temperature. They also study the spectrum of a star’s light to determine not only the temperature, but also the chemical makeup of the star’s outer layers.

Many different types of stars exist. Some types of stars are really just different stages of a star’s evolution. Some types are different because the stars formed with much more or much less mass than other stars, or because they formed close to other stars. The Sun is a type of star known as a main-sequence star. Eventually, main-sequence stars such as the Sun swell into giant stars and then evolve into tiny, dense, white dwarf stars. Main-sequence stars and giants have a role in the behaviour of most variable stars and novas. A star much more massive than the Sun will become a supergiant star, then explode as a supernova. A supernova may leave behind a neutron star or a black hole.

In about 1910 Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell independently worked out a way to graph basic properties of stars. On the horizontal axis of their graphs, they plotted the temperatures of stars. On the vertical axis, they plotted the brightness of stars in a way that allowed the stars to be compared. (One plotted the absolute brightness, or absolute magnitude, of a star, a measurement of brightness that takes into account the distance of the star from Earth. The other plotted stars in a nearby galaxy, all about the same distance from Earth.)

On an H-R diagram, the brightest stars are at the top and the hottest stars are at the left. Hertzsprung and Russell found that most stars fell on a diagonal line across the H-R diagram from upper left to lower right. This line is called the main sequence. The diagonal line of main-sequence stars indicates that temperature and brightness of these stars are directly related. The hotter a main-sequence star is, the brighter it is. The Sun is a main-sequence star, located in about the middle of the graph. More faint, cool stars exist than hot, bright ones, so the Sun is brighter and hotter than most of the stars in the universe.

At the upper right of the H-R diagram, above the main sequence, stars are brighter than main-sequence stars of the same colour. The only way stars of a certain colour can be brighter than other stars of the same colour is if the brighter stars are also bigger. Bigger stars are not necessarily more massive, but they do have larger diameters. Stars that fall in the upper right of the H-R diagram are known as giant stars or, for even brighter stars, supergiant stars. Supergiant stars have both larger diameters and larger masses than giant stars.

Giant and supergiant stars represent stages in the lives of stars after they have burned most of their internal hydrogen fuel. Stars swell as they move off the main sequence, becoming giants and, for more massive stars, supergiants.

A few stars fall in the lower left portion of the H-R diagram, below the main sequence. Just as giant stars are larger and brighter than main-sequence stars, these stars are smaller and dimmer. These smaller, dimmer stars are hot enough to be white or blue-white in colour and are known as white dwarfs.

White dwarf stars are only about the size of Earth. They represent stars with about the mass of the Sun that have burned as much hydrogen as they can. The gravitational force of a white dwarf’s mass is pulling the star inward, but electrons in the star resist being pushed together. The gravitational force is able to pull the star into a much denser form than it was in when the star was burning hydrogen. The final stage of life for all stars like the Sun is the white dwarf stage.

Many stars vary in brightness over time. These variable stars come in a variety of types. One important type is called a Cepheid variable, named after the star Delta Cephei, which is a prime example of a Cepheid variable. These stars vary in brightness as they swell and contract over a period of weeks or months. Their average brightness depends on how long the period of variation takes. Thus astronomers can determine how bright the star is merely by measuring the length of the period. By comparing how intrinsically bright these variable stars are with how bright they look from Earth, astronomers can calculate how far away these stars are from Earth. Since they are giant stars and are very bright, Cepheid variables in other galaxies are visible from Earth. Studies of Cepheid variables tell astronomers how far away these galaxies are and are very useful for determining the distance scale of the universe. The Hubble Space Telescope (HST) can determine the periods of Cepheid stars in galaxies farther away than ground-based telescopes can see. Astronomers are developing a more accurate idea of the distance scale of the universe with HST data.
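The step from intrinsic to apparent brightness uses the standard distance modulus, m - M = 5*log10(d) - 5 with d in parsecs. The magnitudes below are hypothetical, chosen only to show the arithmetic:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus m - M = 5*log10(d) - 5, solved for d in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid: its period implies absolute magnitude -4,
# and it appears at magnitude +11 from Earth.
d = distance_parsecs(11.0, -4.0)
print(round(d))  # 10000 parsecs, roughly 33,000 light-years
```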

Cepheid variables are only one type of variable star. Stars called long-period variables vary in brightness as they contract and expand, but these stars are not as regular as Cepheid variables. Mira, a star in the constellation Cetus (the whale), is a prime example of a long-period variable star. Variable stars called eclipsing binary stars are really pairs of stars. Their brightness varies because one member of the pair appears to pass in front of the other, as seen from Earth. Variable stars called R Coronae Borealis stars vary because they occasionally give off clouds of carbon dust that dim them.

Sometimes stars brighten drastically, becoming as much as 100 times brighter than they were. These stars are called novas (Latin for ‘new stars’). They are not really new, just much brighter than they were earlier. A nova is a binary, or double, star in which one member is a white dwarf and the other is a giant or supergiant. Matter from the large star falls onto the small star. After a thick layer of the large star’s atmosphere has collected on the white dwarf, the layer burns off in a nuclear fusion reaction. The fusion produces a huge amount of energy, which, from Earth, appears as the brightening of the nova. The nova gradually returns to its original state, and material from the large star again begins to collect on the white dwarf.

Sometimes stars brighten many times more drastically than novas do. A star that had been too dim to see can become one of the brightest stars in the sky. These stars are called supernovas. Sometimes supernovas that occur in other galaxies are so bright that, from Earth, they appear as bright as their host galaxy.

There are two types of supernovas. One type is an extreme case of a nova, in which matter falls from a giant or supergiant companion onto a white dwarf. In the case of a supernova, the white dwarf gains so much fuel from its companion that the star increases in mass until strong gravitational forces cause it to become unstable. The star collapses and the core explodes, vaporizing much of the white dwarf and producing an immense amount of light. Only bits of the white dwarf remain after this type of supernova occurs.

The other type of supernova occurs when a supergiant star uses up all its nuclear fuel in nuclear fusion reactions. The star uses up its hydrogen fuel, but the core is hot enough that it provides the initial energy necessary for the star to begin ‘burning’ helium, then carbon, and then heavier elements through nuclear fusion. The process stops when the core is mostly iron, which is too heavy for the star to ‘burn’ in a way that gives off energy. With no such fuel left, the inward gravitational attraction of the star’s material for itself has no outward balancing force, and the core collapses. As it collapses, the core releases a shock wave that tears apart the star’s atmosphere. The core continues collapsing until it forms either a neutron star or a black hole, depending on its mass.

Only a handful of supernovas are known in our galaxy. The last Milky Way supernova seen from Earth was observed in 1604. In 1987 astronomers observed a supernova in the Large Magellanic Cloud, one of the Milky Way’s satellite galaxies. This supernova became bright enough to be visible to the unaided eye and is still under careful study from telescopes on Earth and from the Hubble Space Telescope. A supernova in the process of exploding emits radiation in the X-ray, ultraviolet, and radio ranges; studies in these parts of the spectrum are especially useful for astronomers studying supernova remnants.

Neutron stars are the collapsed cores sometimes left behind by supernova explosions. Pulsars are a special type of neutron star. Pulsars and neutron stars form when the remnant of a star left after a supernova explosion collapses until it is about 10 km (about 6 mi) in radius. At that point, the neutrons (electrically neutral atomic particles) of the star resist being pressed together further. When the force produced by the neutrons balances the gravitational force, the core stops collapsing. At that point, the star is so dense that a teaspoonful has the mass of a billion metric tons.
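The teaspoon figure can be checked with straightforward arithmetic. The sketch assumes a typical neutron-star mass of 1.4 solar masses (an assumption, not a figure from the text) together with the 10 km radius quoted above:

```python
import math

M_SUN = 1.989e30    # solar mass, kg (standard constant)
mass = 1.4 * M_SUN  # assumed typical neutron-star mass
radius = 10e3       # m, the ~10 km radius quoted above

density = mass / ((4 / 3) * math.pi * radius ** 3)  # kg per cubic metre
teaspoon = density * 5e-6                           # a teaspoon is ~5 millilitres

print(f"{teaspoon:.1e} kg")  # a few times 1e12 kg: billions of metric tons
```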

Neutron stars become pulsars when the magnetic field of a neutron star directs a beam of radio waves out into space. The star is so small that it rotates from one to a few hundred times per second. As the star rotates, the beam of radio waves sweeps out a path in space. If Earth is in the path of the beam, radio astronomers see the rotating beam as periodic pulses of radio waves. This pulsing is the reason these stars are called pulsars.

Some neutron stars are in binary systems with an ordinary star neighbour. The gravitational pull of a neutron star pulls material off its neighbour. The rotation of the neutron star heats the material, causing it to emit X-rays. The neutron star’s magnetic field directs the X-rays into a beam that sweeps into space and may be detected from Earth. Astronomers call these stars X-ray pulsars.

Gamma-ray spacecraft detect bursts of gamma rays about once a day. The bursts come from sources in distant galaxies, so they must be extremely powerful for us to be able to detect them. A leading model used to explain the bursts is the merger of two neutron stars in a distant galaxy with a resulting hot fireball. A few such explosions have been seen and studied with the Hubble and Keck telescopes.

Black holes are objects that are so massive and dense that their immense gravitational pull does not even let light escape. If the core left over after a supernova explosion has a mass of more than about five times that of the Sun, the force holding up the neutrons in the core is not large enough to balance the inward gravitational force. No outward force is large enough to resist the gravitational force. The core of the star continues to collapse. When the core's mass is sufficiently concentrated, the gravitational force of the core is so strong that nothing, not even light, can escape it. The gravitational force is so strong that classical physics no longer applies, and astronomers use Einstein’s general theory of relativity to explain the behaviour of light and matter under such strong gravitational forces. According to general relativity, space around the core becomes so warped that nothing can escape, creating a black hole. A star with a mass ten times the mass of the Sun would become a black hole if it were compressed to 90 km (60 mi) or less in diameter.
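The size threshold comes from the Schwarzschild radius, r_s = 2GM/c^2, the radius below which not even light escapes. A sketch with standard constants (popular accounts round the resulting figure differently):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius_km(mass_suns):
    """r_s = 2GM/c^2, the radius below which nothing escapes."""
    return 2 * G * mass_suns * M_SUN / C ** 2 / 1000

rs = schwarzschild_radius_km(10)  # a ten-solar-mass star
print(round(rs))                  # about 30 km
```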

Astronomers have various ways of detecting black holes. When a black hole is in a binary system, matter from the companion star spirals into the black hole, forming a disk of gas around it. The disk becomes so hot that it gives off X-rays that astronomers can detect from Earth. Astronomers use X-ray telescopes in space to find X-ray sources, and then they look for signs that an unseen object of more than about five times the mass of the Sun is causing gravitational tugs on a visible object. By 1999 astronomers had found about a dozen potential black holes.

The basic method that astronomers use to find the distance of a star from Earth uses parallax. Parallax is the change in apparent position of a distant object when viewed from different places. For example, imagine a tree standing in the centre of a field, with a row of buildings at the edge of the field behind the tree. If two observers stand at the two front corners of the field, the tree will appear in front of a different building for each observer. Similarly, a nearby star's position appears different when seen from different angles.

Parallax also allows human eyes to judge distance. Each eye sees an object from a different angle. The brain compares the two pictures to judge the distance to the object. Astronomers use the same idea to calculate the distance to a star. Stars are very far away, so astronomers must look at a star from two locations as far apart as possible to get a measurement. The movement of Earth around the Sun makes this possible. By taking measurements six months apart from the same place on Earth, astronomers take measurements from locations separated by the diameter of Earth’s orbit. That is a separation of about 300 million km (186 million mi). The nearest stars will appear to shift slightly with respect to the background of more distant stars. Even so, the greatest stellar parallax is only about 0.77 seconds of arc, an amount 4,600 times smaller than a single degree. Astronomers calculate a star’s distance by dividing one by the parallax. Distances of stars are usually measured in parsecs. A parsec is 3.26 light-years, and a light-year is the distance that light travels in a year, or about 9.5 trillion km (5.9 trillion mi). Proxima Centauri, the Sun’s nearest neighbour, has a parallax of 0.77 seconds of arc. This measurement indicates that Proxima Centauri’s distance from Earth is about 1.3 parsecs, or 4.2 light-years. Because Proxima Centauri is the Sun’s nearest neighbour, it has a larger parallax than any other star.
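The divide-by-parallax rule in code, using the Proxima Centauri figure from the text:

```python
LY_PER_PARSEC = 3.26  # light-years per parsec

def distance_from_parallax(parallax_arcsec):
    """A star's distance in parsecs is 1 / (its parallax in arcseconds)."""
    return 1.0 / parallax_arcsec

pc = distance_from_parallax(0.77)  # Proxima Centauri's parallax
print(round(pc, 1), round(pc * LY_PER_PARSEC, 1))  # 1.3 parsecs, 4.2 light-years
```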

Astronomers can measure stellar parallaxes for stars up to about 500 light-years away, which is only about 2 percent of the distance to the centre of our galaxy. Beyond that distance, the parallax angle is too small to measure.

A European Space Agency spacecraft named Hipparcos (an acronym for High Precision Parallax Collecting Satellite), launched in 1989, gave a set of accurate parallaxes across the sky that was released in 1997. This set of measurements has provided a uniform database of stellar distances for more than 100,000 stars and a somewhat less accurate database of more than one million stars. These parallax measurements provide the base for measurements of the distance scale of the universe. Hipparcos data are leading to more accurate age calculations for the universe and for objects in it, especially globular clusters of stars.

Astronomers use a star’s light to determine the star’s temperature, composition, and motion. Astronomers analyse a star’s light by looking at its intensity at different wavelengths. Blue light has the shortest visible wavelengths, at about 400 nanometres. (A nanometre, abbreviated ‘nm’, is one billionth of a metre, or about one forty-thousandth of an inch.) Red light has the longest visible wavelengths, at about 650 nm. A law of radiation known as Wien's displacement law (developed by German physicist Wilhelm Wien) links the wavelength at which the most energy is given out by an object and its temperature. A star like the Sun, whose surface temperature is about 6000 K (about 5730°C or about 10,350°F), gives off the most radiation in yellow-green wavelengths, with decreasing amounts in shorter and longer wavelengths. Astronomers put filters of different standard colours on telescopes to allow only light of a particular colour from a star to pass. In this way, astronomers determine the brightness of a star at particular wavelengths. From this information, astronomers can use Wien’s law to determine the star’s surface temperature.
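Wien's law in numbers: the wavelength of peak emission is b/T, where b is Wien's displacement constant, about 2.898e-3 metre-kelvins (a standard value, not from the text):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_nm(temperature_k):
    """Wien's displacement law: wavelength of peak emission, in nanometres."""
    return WIEN_B / temperature_k * 1e9

peak = peak_wavelength_nm(6000)  # the solar surface temperature quoted above
print(round(peak))               # about 483 nm, in the green part of the spectrum
```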

Astronomers can see the different wavelengths of light of a star in more detail by looking at its spectrum. The continuous rainbow of colour of a star's spectrum is crossed by dark lines, or spectral lines. In the early 19th century, German physicist Josef Fraunhofer identified such lines in the Sun's spectrum, and they are still known as Fraunhofer lines. American astronomer Annie Jump Cannon divided stars into several categories by the appearance of their spectra. She labelled them with capital letters according to how dark their hydrogen spectral lines were. Later astronomers reordered these categories according to decreasing temperature. The categories are O, B, A, F, G, K, and M, where O stars are the hottest and M stars are the coolest. The Sun is a G star. An additional spectral type, L stars, was suggested in 1998 to accommodate some cool stars studied using new infrared observational capabilities. Detailed study of spectral lines shows the physical conditions in the atmospheres of stars. Careful study of spectral lines shows that some stars have broader lines than others of the same spectral type. The broad lines indicate that the outer layers of these stars are more diffuse, meaning that these layers are larger, but spread more thinly, than the outer layers of other stars. Stars with large diffuse atmospheres are called giants. Giant stars are not necessarily more massive than other stars; the outer layers of giant stars are just more spread out.

Many stars have thousands of spectral lines from iron and other elements near iron in the periodic table. Other stars of the same temperature have very few spectral lines from such elements. Astronomers interpret these findings to mean that two different populations of stars exist. Some formed long ago, before supernovas produced the heavy elements, and others formed more recently and incorporated some heavy elements. The Sun is one of the more recent stars.

Spectral lines can also be studied to see if they change in wavelength or are different in wavelength from sources of the same lines on Earth. These studies tell us, according to the Doppler effect, how much the star is moving toward or away from us. Such studies of starlight can tell us about the orbits of stars in binary systems or about the pulsations of variable stars, for example.

Astronomers study galaxies to learn about the structure of the universe. Galaxies are huge collections of billions of stars. Our Sun is part of the Milky Way Galaxy. Galaxies also contain dark strips of dust and may contain huge black holes at their centres. Galaxies exist in different shapes and sizes. Some galaxies are spirals, some are oval, or elliptical, and some are irregular. The Milky Way is a spiral galaxy. Galaxies tend to group together in clusters.

Our Sun is only one of about 400 billion stars in our home galaxy, the Milky Way. On a dark night, far from outdoor lighting, a faint, hazy, whitish band spans the sky. This band is the Milky Way Galaxy as it appears from Earth. The Milky Way looks splotchy, with darker regions interspersed with lighter ones.

The Milky Way Galaxy is a pinwheel-shaped flattened disk about 75,000 light-years in diameter. The Sun is located on a spiral arm about two-thirds of the way out from the centre. The galaxy spins, but the centre spins faster than the arms. At Earth’s position, the galaxy makes a complete rotation about every 200 million years.

When observers on Earth look toward the brightest part of the Milky Way, which is in the constellation Sagittarius, they look through the galaxy’s disk toward its centre. This disk is composed of the stars, gas, and dust between Earth and the galactic centre. When observers look in the sky in other directions, they do not see as much of the galaxy’s gas and dust, and so can see objects beyond the galaxy more clearly.

The Milky Way Galaxy has a core surrounded by its spiral arms. A spherical cloud containing about 100 examples of a type of star cluster known as a globular cluster surrounds the galaxy. Still farther out is a galactic corona. Astronomers are not sure what types of particles or objects occupy the corona, but these objects do exert a measurable gravitational force on the rest of the galaxy. Galaxies contain billions of stars, but the space between stars is not empty. Astronomers believe that almost every galaxy probably has a huge black hole at its centre.

The space between stars in a galaxy consists of low-density gas and dust. The dust is largely carbon given off by red-giant stars. The gas is largely hydrogen, which accounts for 90 percent of the atoms in the universe. Hydrogen exists in two main forms in the universe. Astronomers give complete hydrogen atoms, with a nucleus and an electron, a designation of the Roman numeral I, or HI. Ionized hydrogen, hydrogen made up of atoms missing their electrons, is given the designation II, or HII. Clouds, or regions, of both types of hydrogen exist between the stars. HI regions are too cold to produce visible radiation, but they do emit radio waves that are useful in measuring the movement of gas in our own galaxy and in distant galaxies. The HII regions form around hot stars. These regions emit diffuse radiation in the visual range, as well as in the radio, infrared, and ultraviolet ranges. The cloudy light from such regions forms beautiful nebulas such as the Great Orion Nebula.

Astronomers have located more than 100 types of molecules in interstellar space. These molecules occur only in trace amounts among the hydrogen. Still, astronomers can use these molecules to map galaxies. By measuring the density of the molecules throughout a galaxy, astronomers can get an idea of the galaxy’s structure. Interstellar dust sometimes gathers to form dark nebulae, which appear in silhouette against background gas or stars from Earth. The Horsehead Nebula, for example, is the silhouette of interstellar dust against a background HI region.

The first known black holes were the collapsed cores of supernova stars, but astronomers have since discovered signs of much larger black holes at the centres of galaxies. These galactic black holes contain millions of times as much mass as the Sun. Astronomers believe that huge black holes such as these provide the energy of mysterious objects called quasars. Quasars are very distant objects that are moving away from Earth at high speed. The first ones discovered were very powerful radio sources, but scientists have since discovered quasars that don’t strongly emit radio waves. Astronomers believe that almost every galaxy, whether spiral or elliptical, has a huge black hole at its centre.

Astronomers look for galactic black holes by studying the movement of gas within galaxies. By studying the spectrum of a galaxy, astronomers can tell if gas near the centre of the galaxy is rotating rapidly. By measuring the speed of rotation and the distance from various points in the galaxy to the centre of the galaxy, astronomers can determine the amount of mass in the centre of the galaxy. Measurements of many galaxies show that gas near the centre is moving so quickly that only a black hole could be dense enough to concentrate so much mass in such a small space. Astronomers suspect that even the centre of our own Milky Way harbours a giant black hole. The clear images from the Hubble Space Telescope have allowed measurements of motions closer to the centres of galaxies than previously possible, and have led to the confirmation in several cases that giant black holes are present.

Galaxies are classified by shape. The three types are spiral, elliptical, and irregular. Spiral galaxies consist of a central mass with one, two, or three arms that spiral around the centre. An elliptical galaxy is oval, with a bright centre that gradually, evenly dims to the edges. Irregular galaxies are not symmetrical and do not look like spiral or elliptical galaxies. Irregular galaxies vary widely in appearance. A galaxy that has a regular spiral or elliptical shape but has some special oddity is known as a peculiar galaxy. For example, some peculiar galaxies are stretched and distorted from the gravitational pull of a nearby galaxy.

Spiral galaxies are flattened pinwheels in shape. They can have from one to three spiral arms coming from a central core. The Great Andromeda Spiral Galaxy is a good example of a spiral galaxy. The shape of the Milky Way is not visible from Earth, but astronomers have determined that the Milky Way is also a spiral galaxy. American astronomer Edwin Hubble further classified spiral galaxies by the tightness of their spirals. In order of increasingly open arms, Hubble’s types are Sa, Sb, and Sc. Some galaxies have a straight, bright, bar-shaped feature across their centre, with the spiral arms coming off the bar or off a ring around the bar. With a capital B for the bar, the Hubble types of these galaxies are SBa, SBb, and SBc.

Many clusters of galaxies have giant elliptical galaxies at their centres. Smaller elliptical galaxies, called dwarf elliptical galaxies, are much more common than giant ones. Most of the two dozen galaxies in the Milky Way’s Local Group of galaxies are dwarf elliptical galaxies.

Astronomers classify elliptical galaxies by how oval they look, ranging from E0 for very round to E3 for intermediately oval to E7 for extremely elongated. The galaxy class E7 is also called S0, which is also known as a lenticular galaxy, a shape with an elongated disk but no spiral arms. Because astronomers can see other galaxies only from the perspective of Earth, the shape astronomers see is not necessarily the exact shape of a galaxy. For instance, they may be viewing it end-on rather than from above or below.
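The En classes described above are conventionally assigned from a galaxy's apparent axis ratio: the class number is ten times the apparent ellipticity, 10 × (1 − b/a), where a and b are the apparent major and minor axes. The following is a minimal sketch of that convention; the function name and rounding choice are illustrative, not from the text.

```python
# Sketch: assign a Hubble elliptical class (E0..E7) from the apparent
# axis ratio of a galaxy's image. Assumes the standard convention
# n = 10 * (1 - b/a); names and rounding here are illustrative.

def hubble_elliptical_class(major_axis: float, minor_axis: float) -> str:
    """Return the Hubble class (E0..E7) for an apparent axis ratio."""
    n = int(round(10 * (1 - minor_axis / major_axis)))
    n = min(n, 7)  # shapes more elongated than E7 are not classed as ellipticals
    return f"E{n}"

print(hubble_elliptical_class(1.0, 1.0))  # round on the sky -> E0
print(hubble_elliptical_class(1.0, 0.7))  # intermediately oval -> E3
print(hubble_elliptical_class(1.0, 0.3))  # extremely elongated -> E7
```

Because only the projected shape enters the formula, the class describes the apparent, not the intrinsic, shape, exactly the limitation noted above.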

Some galaxies have no structure, while others have some trace of structure but do not fit the spiral or elliptical classes. All of these galaxies are called irregular galaxies. The two small galaxies that are satellites of the Milky Way Galaxy are both irregular. They are known as the Magellanic Clouds. The Large Magellanic Cloud shows signs of having a bar in its centre. The Small Magellanic Cloud is more formless. Studies of stars in the Large and Small Magellanic Clouds have been fundamental for astronomers’ understanding of the universe. Each of these galaxies provides groups of stars that are all at the same distance from Earth, allowing astronomers to compare the absolute brightness of these stars.

In the late 1920s American astronomer Edwin Hubble discovered that all but the nearest galaxies to us are receding, or moving away from us. Further, he found that the farther away from Earth a galaxy is, the faster it is receding. He made his discovery by taking spectra of galaxies and measuring the amount by which the wavelengths of spectral lines were shifted. He measured distance in a separate way, usually from studies of Cepheid variable stars. Hubble discovered that essentially all the spectra of all the galaxies were shifted toward the red, or had red-shifts. The red-shifts of galaxies increased with increasing distance from Earth. After Hubble’s work, other astronomers made the connection between red-shift and velocity, showing that the farther a galaxy is from Earth, the faster it moves away from Earth. This idea is called Hubble’s law and is the basis for the belief that the universe is uniformly expanding. Other uniformly expanding three-dimensional objects, such as a rising cake with raisins in the batter, show the same effect: from any given raisin, more distant raisins appear to recede more rapidly than nearer ones, because more expanding material lies between them.
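The raisin-cake analogy can be made concrete with a toy calculation: scale every position by a common factor and watch each "raisin" recede from a chosen one at a speed proportional to its distance. All numbers below are illustrative.

```python
# Toy model of uniform expansion: every distance grows by the same
# fraction per unit time, so recession speed is proportional to distance
# (the cake's own "Hubble law"). All values are illustrative.

positions = [0.0, 1.0, 2.0, 5.0]   # raisin positions, arbitrary units
scale_rate = 0.1                   # fractional expansion per unit time
dt = 1.0

observer = positions[0]            # watch from the first raisin
for p in positions[1:]:
    distance = p - observer
    new_distance = distance * (1 + scale_rate * dt)
    velocity = (new_distance - distance) / dt
    print(f"distance {distance:.1f} -> recession speed {velocity:.2f}")
```

The ratio of speed to distance comes out the same (0.1) for every raisin, which is exactly the linear relationship Hubble found for galaxies.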

Hubble's law states that there is a straight-line, or linear, relationship between the speed at which an object is moving away from Earth and the distance between the object and Earth. The speed at which an object is moving away from Earth is called the object’s velocity of recession. Hubble’s law indicates that as velocity of recession increases, distance increases by the same proportion. Using this law, astronomers can calculate the distance to the most distant galaxies, given only measurements of their velocities calculated by observing how much their light is shifted. Astronomers can accurately measure the red-shifts of objects so distant that the distance between Earth and the objects cannot be measured by other means.

The constant of proportionality that relates velocity to distance in Hubble's law is called Hubble's constant, or H. Hubble's law is often written v = Hd, or velocity equals Hubble's constant multiplied by distance. Thus determining Hubble's constant will give the speed of the universe's expansion. The inverse of Hubble’s constant, or 1/H, theoretically provides an estimate of the age of the universe. Astronomers now believe that Hubble’s constant has changed over the lifetime of the universe, however, so estimates of expansion and age must be adjusted accordingly.

The value of Hubble’s constant probably falls between 64 and 78 kilometres per second per megaparsec (between 40 and 48 miles per second per megaparsec). A megaparsec is one million parsecs, and a parsec is 3.26 light-years. Astronomers using the Hubble Space Telescope studied Cepheid variables in distant galaxies, measuring the distance between those stars and Earth to refine the value of Hubble’s constant. The value they found is 72 kilometres per second per megaparsec (45 miles per second per megaparsec), with an uncertainty of only 10 percent.

The actual age of the universe depends not only on Hubble's constant but also on how much the gravitational pull of the mass in the universe slows the universe’s expansion. Some data from studies that use the brightness of distant supernovas to assess distance indicate that the universe's expansion is speeding up instead of slowing. Astronomers invented the term ‘dark energy’ for the unknown cause of this accelerating expansion and are actively investigating these topics. The ultimate goal of astronomers is to understand the structure, behaviour, and evolution of all of the matter and energy that exist. Astronomers call the set of all matter and energy the universe. The universe is infinite in space, but astronomers believe it does have a finite age. Astronomers accept the theory that about 14 billion years ago the universe began as an explosive event resulting in a hot, dense, expanding sea of matter and energy. This event is known as the big bang. Astronomers cannot observe that far back in time. Many astronomers believe, however, that within the first fraction of a second after the big bang, the universe went through a tremendous inflation, expanding many times in size, before it resumed a slower expansion.

As the universe expanded and cooled, various forms of elementary particles of matter formed. By the time the universe was one second old, protons had formed. For approximately the next 1,000 seconds, in the era of nucleosynthesis, all the nuclei of deuterium (hydrogen with both a proton and neutron in the nucleus) that are present in the universe today formed. During this brief period, some nuclei of lithium, beryllium, and helium formed as well.

When the universe was about one million years old, it had cooled to about 3000 K (about 2700°C or about 4900°F). At that temperature, the protons and heavier nuclei formed during nucleosynthesis could combine with electrons to form atoms. Before electrons combined with nuclei, radiation could not travel far through space: photons (packets of light energy) constantly collided with the free electrons. Once protons and electrons combined to form hydrogen, photons became able to travel through space. The radiation carried by the photons had the characteristic spectrum of a hot gas. Since the time this radiation was first released, it has cooled and is now 3 K (about −270°C or −454°F). It is called the primeval background radiation and has been definitively detected and studied, first by radio telescopes and then by the Cosmic Background Explorer (COBE) and Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft. COBE, WMAP, and ground-based radio telescopes detected tiny deviations from uniformity in the primeval background radiation; these deviations may be the seeds from which clusters of galaxies grew.

The gravitational force from invisible matter, known as dark matter, may have helped speed the formation of structure in the universe. Observations from the Hubble Space Telescope have revealed older galaxies than astronomers expected, reducing the interval between the big bang and the formation of galaxies or clusters of galaxies.

Beginning about two billion years after the big bang, and for roughly the next two billion years, quasars formed as active giant black holes in the cores of galaxies. These quasars gave off radiation as they consumed matter from nearby galaxies. Few quasars appear close to Earth, so quasars must be a feature of the earlier universe.

A population of stars formed out of the interstellar gas and dust that contracted to form galaxies. This first population, known as Population II, was made up almost entirely of hydrogen and helium. The stars that formed evolved and gave out heavier elements that were made through fusion in the stars’ cores or that were formed as the stars exploded as supernovas. The later generation of stars, to which the Sun belongs, is known as Population I and contains heavy elements formed by the earlier population. The Sun formed about five billion years ago and is almost halfway through its 11-billion-year lifetime.

About 4.6 billion years ago, our solar system formed. The oldest fossils of living organisms date from about 3.5 billion years ago and represent cyanobacteria. Life evolved, and 65 million years ago the dinosaurs and many other species were extinguished, probably by a catastrophic meteor impact. Modern humans evolved no earlier than a few hundred thousand years ago, a blink of an eye on the cosmic timescale.

Will the universe expand forever or eventually stop expanding and collapse in on itself? Jay M. Pasachoff, professor of astronomy at Williams College in Williamstown, Massachusetts, confronts this question in this discussion of cosmology. Whether the universe will go on expanding forever depends on whether its density is high enough to halt or reverse the expansion, and the answer to that question may, in turn, depend on the existence of something the German-born American physicist Albert Einstein once labelled the cosmological constant.

New technology allows astronomers to peer further into the universe than ever before. The science of cosmology, the study of the universe as a whole, has become an observational science. Scientists may now verify, modify, or disprove theories that were partially based on guesswork.

In the 1920s, the early days of modern cosmology, it took an astronomer all night at a telescope to observe a single galaxy. Current surveys of the sky will likely compile data for a million different galaxies within a few years. Building upon advances in cosmology over the past century, our understanding of the universe should continue to accelerate.

Modern cosmology began with the studies of Edwin Hubble, who in the mid-1920s measured the speeds at which galaxies move toward or away from us. By observing red-shift (the change in wavelength of the light that galaxies give off as they move away from us), Hubble realized that though the nearest galaxies are approaching us, all distant galaxies are receding. The most distant galaxies are receding most rapidly. This observation is consistent with the characteristics of an expanding universe. Since 1929 an expanding universe has been the first and most basic pillar of cosmology.

In 1990 the National Aeronautics and Space Administration (NASA) launched the Hubble Space Telescope (HST), named to honour the pioneer of cosmology. Appropriately, determining the rate at which the universe expands was one of the telescope’s major tasks.

One of the HST’s key projects was to study Cepheid variables (stars that vary greatly in brightness) and to measure distances in space. Another set of Hubble’s observations focuses on supernovae, exploding stars that can be seen at very great distances because they are so bright. Studies of supernovae in other galaxies reveal the distances to those galaxies.

The term big bang refers to the idea that the expanding universe can be traced back in time to an initial explosion. In the mid-1960s, physicists found important evidence of the big bang when they detected faint microwave radiation coming from every part of the sky. Astronomers think this radiation originated about 300,000 years after the big bang, when the universe thinned enough to become transparent. The existence of cosmic microwave background radiation, and its interpretation, is the second pillar of modern cosmology.

Also in the 1960s, astronomers realized that the lightest of the elements, including hydrogen, helium, lithium, and boron, were formed mainly at the time of the big bang. Most important, deuterium (the form of hydrogen with an extra neutron added to normal hydrogen's single proton) was formed only in the era of nucleosynthesis. This era started about one second after the universe was formed and lasted through the first three minutes or so after the big bang. No sources of deuterium are known since that early epoch. The current ratio of deuterium to regular hydrogen depends on how dense the universe was at that early time, so studies of the deuterium that can now be detected indicate how much matter the universe contains. These studies of the origin of the light elements are the third pillar of modern cosmology.

Until recently many astronomers disagreed on whether the universe was expected to expand forever or eventually stop expanding and collapse in on itself in a ‘big crunch.’

At the General Assembly of the International Astronomical Union (IAU) held in August 2000, a consistent picture of cosmology emerged. This picture depends on the current measured value for the expansion rate of the universe and on the density of the universe as calculated from the abundances of the light elements. The most recent studies of distant supernovae seem to show that the universe's expansion is accelerating, not slowing. Astronomers have recently proposed a theoretical type of negative energy (which would provide a force that opposes the attraction of gravity) to explain the accelerating universe.

For decades scientists have debated the rate at which the universe is expanding. We know that the farther away a galaxy is, the faster it moves away from us. The question is: How fast are galaxies receding for each unit of distance they are away from us? The current value, as announced at the IAU meeting, is 75 km/s/Mpc; that is, for each megaparsec of distance from us (where each megaparsec is 3.26 million light-years), the speed of expansion increases by 75 kilometres per second.

What’s out there, exactly?

In the picture of expansion held until recently, astronomers thought the universe contained just enough matter and energy so that it would expand forever but expand at a slower and slower rate as time went on. The density of matter and energy necessary for this to happen is known as the critical density.

Astronomers now think that only 5 percent or so of the critical density of the universe is made of ordinary matter. Another 25 percent or so of the critical density is made of dark matter, a type of matter that has gravity but that has not been otherwise detected. The accelerating universe, further, shows that the remaining 70 percent of the critical density is made of a strange kind of energy, perhaps that known as the cosmological constant, an idea tentatively invoked and then abandoned by Albert Einstein in equations for his general theory of relativity.

Some may be puzzled: Didn't we learn all about the foundations of physics when we were still at school? The answer is ‘yes’ or ‘no’, depending on the interpretation. We have become acquainted with concepts and general relations that enable us to comprehend an immense range of experiences and make them accessible to mathematical treatment. In a certain sense these concepts and relations are probably even final. This is true, for example, of the laws of light refraction, of the relations of classical thermodynamics as far as it is based on the concepts of pressure, volume, temperature, heat and work, and of the hypothesis of the nonexistence of a perpetual motion machine.

What, then, impels us to devise theory after theory? Why do we devise theories at all? The answer to the latter question is simple: Because we enjoy ‘comprehending’, i.e., reducing phenomena by the process of logic to something already known or (apparently) evident. New theories are first of all necessary when we encounter new facts that cannot be ‘explained’ by existing theories. Nevertheless, this motivation for setting up new theories is, so to speak, trivial, imposed from without. There is another, more subtle motive of no less importance. This is the striving toward unification and simplification of the premises of the theory as a whole (i.e., Mach's principle of economy, interpreted as a logical principle).

There exists a passion for comprehension, just as there exists a passion for music. That passion is altogether common in children, but gets lost in most people later on. Without this passion, there would be neither mathematics nor natural science. Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally, by pure thought, without any empirical foundations, in short, by metaphysics. I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a ‘positivist’ he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be ‘comprehended’ on the basis of a conceptual system built on premises of great simplicity. The skeptic will say that this is a ‘miracle creed’. Admittedly so, but it is a miracle creed that has been borne out to an amazing extent by the development of science.

The rise of atomism is a good example. How may Leucippus have conceived this bold idea? When water freezes and becomes ice, apparently something entirely different from water, why is it that the thawing of the ice forms something that seems indistinguishable from the original water? Leucippus is puzzled and looks for an ‘explanation’. He is driven to the conclusion that in these transitions the ‘essence’ of the thing has not changed at all. Maybe the thing consists of immutable particles and the change is only a change in their spatial arrangement. Could it not be that the same is true of all material objects that emerge again and again with nearly identical qualities?

This idea is not entirely lost during the long hibernation of occidental thought. Two thousand years after Leucippus, Bernoulli wonders why gas exerts pressure on the walls of a container. Should this be ‘explained’ by mutual repulsion of the parts of the gas, in the sense of Newtonian mechanics? This hypothesis appears absurd, for the gas pressure depends on the temperature, all other things being equal. To assume that the Newtonian forces of interaction depend on temperature is contrary to the spirit of Newtonian mechanics. Since Bernoulli is aware of the concept of atomism, he is bound to conclude that the atoms (or molecules) collide with the walls of the container and in doing so exert pressure. After all, one has to assume that atoms are in motion; how else can one account for the varying temperature of gases?

A simple mechanical consideration shows that this pressure depends only on the kinetic energy of the particles and on their density in space. This should have led the physicists of that age to the conclusion that heat consists in random motion of the atoms. Had they taken this consideration as seriously as it deserved to be taken, the development of the theory of heat, in particular the discovery of the equivalence of heat and mechanical energy, would have been considerably facilitated.
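The mechanical consideration mentioned above can be sketched numerically: in kinetic theory the pressure on a wall is P = n·m·⟨vx²⟩, where n is the number density, m the particle mass, and ⟨vx²⟩ the mean squared velocity toward the wall. The sketch below samples thermal velocities for an argon-like gas and compares the result with the ideal-gas value P = n·k·T; the particle count, seed, and gas parameters are illustrative.

```python
# Sketch of Bernoulli's picture: gas pressure from molecular motion.
# We sample 1-D thermal velocities for a given temperature and recover
# the pressure as P = n * m * <vx^2>, then compare with the ideal-gas
# law P = n * k * T. Gas choice and sample size are illustrative.
import math
import random

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.63e-26         # mass of one argon-like atom, kg (illustrative)
T = 300.0            # temperature, K
n = 2.5e25           # number density, 1/m^3 (roughly room conditions)

random.seed(0)
sigma = math.sqrt(k_B * T / m)  # 1-D thermal velocity spread
samples = [random.gauss(0.0, sigma) ** 2 for _ in range(200_000)]
mean_vx2 = sum(samples) / len(samples)

p_kinetic = n * m * mean_vx2    # pressure from molecular impacts
p_ideal = n * k_B * T           # ideal-gas law, for comparison
print(f"{p_kinetic:.3e} Pa vs {p_ideal:.3e} Pa")
```

The two values agree to within sampling noise, illustrating how pressure follows from kinetic energy and density alone, with no temperature-dependent forces needed.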

This example is meant to illustrate two things. The theoretical idea (atomism in this case) does not arise apart and independent of experience; nor can it be derived from experience by a purely logical procedure. It is produced by a creative act. Once a theoretical idea has been acquired, one does well to hold fast to it until it leads to an untenable conclusion.

In Newtonian physics the elementary theoretical concept on which the theoretical description of material bodies is based is the material point, or particle. Thus, matter is considered theoretically to be discontinuous. This makes it necessary to consider the action of material points on one another as ‘action at a distance’. Since the latter concept seems quite contrary to everyday experience, it is only natural that the contemporaries of Newton, and in fact Newton himself, found it difficult to accept. Owing to the almost miraculous success of the Newtonian system, however, the succeeding generations of physicists became used to the idea of action at a distance. Any doubt was buried for a long time to come.

All the same, when, in the second half of the 19th century, the laws of electrodynamics became known, it turned out that these laws could not be satisfactorily incorporated into the Newtonian system. It is fascinating to muse: Would Faraday have discovered the law of electromagnetic induction if he had received a regular college education? Unencumbered by the traditional way of thinking, he felt that the introduction of the ‘field’ as an independent element of reality helped him to coordinate the experimental facts. It was Maxwell who fully comprehended the significance of the field concept; he made the fundamental discovery that the laws of electrodynamics found their natural expression in the differential equations for the electric and magnetic fields. These equations implied the existence of waves, whose properties corresponded to those of light as far as they were known at that time.

This incorporation of optics into the theory of electromagnetism represents one of the greatest triumphs in the striving toward unification of the foundations of physics; Maxwell achieved this unification by purely theoretical arguments, long before it was corroborated by Hertz' experimental work. The new insight made it possible to dispense with the hypothesis of action at a distance, at least in the realm of electromagnetic phenomena; the intermediary field now appeared as the only carrier of electromagnetic interaction between bodies, and the field's behaviour was completely determined by contiguous processes, expressed by differential equations.

Now a question arose: Since the field exists even in a vacuum, should one conceive of the field as a state of a ‘carrier’, or should it be endowed with an independent existence not reducible to anything else? In other words, is there an ‘ether’ which carries the field, the ether being considered to be in an undulatory state when, for example, it carries light waves?

The question has a natural answer: Because one cannot dispense with the field concept, it is preferable not to introduce in addition a carrier with hypothetical properties. However, the pathfinders who first recognized the indispensability of the field concept were still too strongly imbued with the mechanistic tradition of thought to accept unhesitatingly this simple point of view. Nevertheless, in the course of the following decades this view imperceptibly took hold.

The introduction of the field as an elementary concept gave rise to an inconsistency of the theory as a whole. Maxwell's theory, although adequately describing the behaviour of electrically charged particles in their interaction with one another, does not explain the behaviour of electrical densities; i.e., it does not provide a theory of the particles themselves. They must therefore be treated as mass points on the basis of the old theory. The combination of the idea of a continuous field with that of material points discontinuous in space appears inconsistent. A consistent field theory requires continuity of all elements of the theory, not only in time but also in space, and in all points of space. Hence the material particle has no place as a fundamental concept in a field theory. Thus, even apart from the fact that gravitation is not included, Maxwell’s electrodynamics cannot be considered a complete theory.

Maxwell's equations for empty space remain unchanged if the spatial coordinates and the time are subjected to particular linear transformations, the Lorentz transformations (‘covariance’ with respect to Lorentz transformations). Covariance also holds, of course, for a transformation that is composed of two or more such transformations; this is called the ‘group’ property of Lorentz transformations.

Maxwell's equations imply the ‘Lorentz group’, but the Lorentz group does not imply Maxwell's equations. The Lorentz group may effectively be defined independently of Maxwell's equations as a group of linear transformations that leave a particular value of the velocity, the velocity of light, invariant. These transformations hold for the transition from one ‘inertial system’ to another that is in uniform motion relative to the first. The most conspicuous novel property of this transformation group is that it does away with the absolute character of the concept of simultaneity of events distant from each other in space. On this account it is to be expected that all equations of physics are covariant with respect to Lorentz transformations (special theory of relativity). Thus it came about that Maxwell's equations led to a heuristic principle valid far beyond the range of the applicability or even validity of the equations themselves.
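The defining property of the Lorentz group, that the velocity of light is left invariant, can be checked directly with the standard one-dimensional boost formulas. The sketch below boosts an event lying on a light signal's path (x = ct) into a moving frame and verifies that it still satisfies x′ = ct′; the numbers and the choice of natural units (c = 1) are illustrative.

```python
# Sketch: a Lorentz boost leaves the speed of light invariant.
# Standard 1-D boost: t' = gamma*(t - v*x/c^2), x' = gamma*(x - v*t).
# Units with c = 1 are chosen for clarity; values are illustrative.
import math

C = 1.0  # speed of light in natural units

def lorentz_boost(t: float, x: float, v: float) -> tuple[float, float]:
    """Transform an event (t, x) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_prime = gamma * (t - v * x / C**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

t, x = 2.0, 2.0 * C                      # an event on a light signal: x = c*t
t2, x2 = lorentz_boost(t, x, 0.6 * C)
print(abs(x2 - C * t2))                  # ~0: the signal still moves at c
```

The same function also leaves the interval c²t² − x² unchanged, which is another way of stating the covariance property described above.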

Special relativity has this in common with Newtonian mechanics: The laws of both theories are supposed to hold only with respect to certain coordinate systems: those known as ‘inertial systems’. An inertial system is a system in a state of motion such that ‘force-free’ material points within it are not accelerated with respect to the coordinate system. However, this definition is empty if there is no independent means for recognizing the absence of forces. And such a means of recognition does not exist if gravitation is considered as a ‘field’.

Let ‘A’ be a system uniformly accelerated with respect to an ‘inertial system’ I. Material points, not accelerated with respect to I, are accelerated with respect to ‘A’, the acceleration of all the points being equal in magnitude and direction. They behave as if a gravitational field exists with respect to ‘A’, for it is a characteristic property of the gravitational field that the acceleration is independent of the particular nature of the body. There is no reason to exclude the possibility of interpreting this behaviour as the effect of a ‘true’ gravitational field (principle of equivalence). This interpretation implies that ‘A’ is an ‘inertial system’, even though it is accelerated with respect to another inertial system. (It is essential for this argument that the introduction of independent gravitational fields is considered justified even though no masses generating the field are defined. Therefore, to Newton such an argument would not have appeared convincing.) Thus the concepts of inertial system, the law of inertia and the law of motion are deprived of their concrete meaning, not only in classical mechanics but also in special relativity. Moreover, following up this train of thought, it turns out that with respect to ‘A’ time cannot be measured by identical clocks; indeed, even the immediate physical significance of coordinate differences is generally lost. In view of all these difficulties, should one not try, after all, to hold on to the concept of the inertial system, relinquishing the attempt to explain the fundamental character of the gravitational phenomena that manifest themselves in the Newtonian system as the equivalence of inert and gravitational mass? Those who trust in the comprehensibility of nature must answer: No.

This is the gist of the principle of equivalence: In order to account for the equality of inert and gravitational mass within the theory, it is necessary to admit nonlinear transformations of the four coordinates. That is, the group of Lorentz transformations and hence the set of the ‘permissible’ coordinate systems has to be extended.

What group of coordinate transformations can then be substituted for the group of Lorentz transformations? Mathematics suggests an answer that is based on the fundamental investigations of Gauss and Riemann: namely, that the appropriate substitute is the group of all continuous (analytical) transformations of the coordinates. Under these transformations the only thing that remains invariant is the fact that neighbouring points have nearly the same coordinates; the coordinate system expresses only the topological order of the points in space (including its four~dimensional character). The equations expressing the laws of nature must be covariant with respect to all continuous transformations of the coordinates. This is the principle of general relativity.

The procedure just described overcomes a deficiency in the foundations of mechanics that had already been noticed by Newton and was criticized by Leibnitz and, two centuries later, by Mach: Inertia resists acceleration, but acceleration relative to what? Within the frame of classical mechanics the only answer is: Inertia resists acceleration relative to absolute space. This is a physical property of space: space acts on objects, but objects do not act on space. Such is probably the deeper meaning of Newton's assertion spatium est absolutum (space is absolute). Nevertheless, the idea disturbed some, in particular Leibnitz, who did not ascribe an independent existence to space but considered it merely a property of ‘things’ (contiguity of physical objects). Had his justified doubts won out at that time, it hardly would have been a boon to physics, for the empirical and theoretical foundations necessary to follow up his idea were not available in the 17th century.

According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables—the coordinates of space and time. It is just this particular kind of dependence that expresses the spatial character of physical reality.

Since the theory of general relativity implies the representation of physical reality by a continuous field, the concept of particles or material points cannot . . . play a fundamental part, nor can the concept of motion. The particle can only appear as a limited region in space in which the field strength or the energy density is particularly high.

A relativistic theory has to answer two questions: (1) What is the mathematical character of the field? (2) What equations hold for this field?

Concerning the first question: From the mathematical point of view the field is essentially characterized by the way its components transform if a coordinate transformation is applied. Concerning the second question: The equations must determine the field to a sufficient extent while satisfying the postulates of general relativity. Whether or not this requirement can be satisfied depends on the choice of the field-type.

The attempts to comprehend the correlations among the empirical data on the basis of such a highly abstract program may at first appear almost hopeless. The procedure amounts, in fact, to putting the question: What is the most simple property that can be required of the most simple object (field) while preserving the principle of general relativity? Viewed from the standpoint of formal logic, the dual character of the question appears calamitous, quite apart from the vagueness of the concept ‘simple’. Moreover, in physics there is nothing to warrant the assumption that a theory that is ‘logically simple’ should also be ‘true’.

Yet every theory is speculative. When the basic concepts of a theory are comparatively ‘close to experience’ (e.g., the concepts of force, pressure, mass), its speculative character is not so easily discernible. If, however, a theory is such as to require the application of complicated logical processes in order to reach conclusions from the premises that can be confronted with observation, everybody becomes conscious of the speculative nature of the theory. In such a case an almost irresistible feeling of aversion arises in people who are inexperienced in epistemological analysis and who are unaware of the precarious nature of theoretical thinking in those fields with which they are familiar.

On the other hand, it must be conceded that a theory has an important advantage if its basic concepts and fundamental hypotheses are ‘close to experience’, and greater confidence in such a theory is justifiable. There is less danger of going completely astray, particularly since it takes so much less time and effort to disprove such theories by experience. Yet more and more, as the depth of our knowledge increases, we must give up this advantage in our quest for logical simplicity and uniformity in the foundations of physical theory. It has to be admitted that general relativity has gone further than previous physical theories in relinquishing ‘closeness to experience’ of fundamental concepts in order to attain logical simplicity. This holds already for the theory of gravitation, and it is even more true of the new generalization, which is an attempt to comprise the properties of the total field. In the generalized theory the procedure of deriving from the premises of the theory conclusions that can be confronted with empirical data is so difficult that so far no such result has been obtained. In favour of this theory are, at this point, its logical simplicity and its ‘rigidity’. Rigidity means here that the theory is either true or false, but not modifiable.

The greatest inner difficulty impeding the development of the theory of relativity is the dual nature of the problem, indicated by the two questions we have asked. This duality is the reason the development of the theory has taken place in two steps so widely separated in time. The first of these steps, the theory of gravitation, is based on the principle of equivalence discussed above and rests on the following consideration: According to the theory of special relativity, light has a constant velocity of propagation. If a light ray in a vacuum starts from a point, designated by the coordinates x1, x2 and x3 in a three-dimensional coordinate system, at the time x4, it spreads as a spherical wave and reaches a neighbouring point (x1 + dx1, x2 + dx2, x3 + dx3) at the time x4 + dx4. Introducing the velocity of light, c, we write the expression:

dx1² + dx2² + dx3² − c²dx4² = 0

This expression represents an objective relation between neighbouring space-time points in four dimensions, and it holds for all inertial systems, provided the coordinate transformations are restricted to those of special relativity. The relation loses this form, however, if arbitrary continuous transformations of the coordinates are admitted in accordance with the principle of general relativity. The relation then assumes the more general form:

Σik gik dxi dxk=0

The gik are certain functions of the coordinates that transform in a definite way if a continuous coordinate transformation is applied. According to the principle of equivalence, these gik functions describe a particular kind of gravitational field: a field that can be obtained by transformation of ‘field-free’ space. The gik satisfy a particular law of transformation. Mathematically speaking, they are the components of a ‘tensor’ with a property of symmetry that is preserved in all transformations; the symmetry property is expressed as follows:

gik=gki
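The ‘definite way’ of transforming mentioned above can be written out explicitly in index notation (a notation not used elsewhere in this article): under a change of coordinates x → x′ the components mix as

```latex
g'_{ik} = \frac{\partial x^{a}}{\partial x'^{i}}\,
          \frac{\partial x^{b}}{\partial x'^{k}}\, g_{ab}
```

from which it is evident that the symmetry gik = gki, once present, is preserved by every such transformation.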

The idea suggests itself: May we not ascribe objective meaning to such a symmetrical tensor, even though the field cannot be obtained from the empty space of special relativity by a mere coordinate transformation? Although we cannot expect that such a symmetrical tensor will describe the most general field, it may describe the particular case of the ‘pure gravitational field’. Thus it is evident what kind of field, at least for a special case, general relativity has to postulate: a symmetrical tensor field.

Hence only the second question is left: What kind of general covariant field law can be postulated for a symmetrical tensor field?

This question was not difficult to answer, since the necessary mathematical conceptions were already at hand in the form of the metric theory of surfaces, created a century ago by Gauss and extended by Riemann to manifolds of an arbitrary number of dimensions. The result of this purely formal investigation has been amazing in many respects. The differential equations that can be postulated as field law for gik cannot be of lower than second order, i.e., they must at least contain the second derivatives of the gik with respect to the coordinates. Assuming that no higher than second derivatives appear in the field law, it is mathematically determined by the principle of general relativity. The system of equations can be written in the form: Rik = 0. The Rik transform in the same manner as the gik, i.e., they too form a symmetrical tensor.
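The statement that the field law contains second derivatives of the gik can be made concrete. In one common sign convention (the text does not fix one), the Rik are built from the Christoffel symbols, which themselves contain first derivatives of the metric:

```latex
\Gamma^{l}_{ik} = \tfrac{1}{2}\, g^{lm}\!\left(\partial_i g_{mk} + \partial_k g_{mi} - \partial_m g_{ik}\right),
\qquad
R_{ik} = \partial_l \Gamma^{l}_{ik} - \partial_k \Gamma^{l}_{il}
       + \Gamma^{l}_{ik}\,\Gamma^{m}_{lm} - \Gamma^{m}_{il}\,\Gamma^{l}_{km} = 0
```

Differentiating the Christoffel symbols once more produces the second derivatives of the gik, so the equations Rik = 0 are indeed of second order.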

These differential equations completely replace the Newtonian theory of the motion of celestial bodies provided the masses are represented as singularities of the field. In other words, they contain the law of force as well as the law of motion while eliminating ‘inertial systems’.

The fact that the masses appear as singularities indicates that these masses themselves cannot be explained by symmetrical gik fields, or ‘gravitational fields’. Not even the fact that only positive gravitating masses exist can be deduced from this theory. Evidently a complete relativistic field theory must be based on a field of more complex nature, that is, a generalization of the symmetrical tensor field.

The first observation is that the principle of general relativity imposes exceedingly strong restrictions on the theoretical possibilities. Without this restrictive principle it would be practically impossible for anybody to hit on the gravitational equations, not even by using the principle of special relativity, even though one knows that the field has to be described by a symmetrical tensor. No amount of collection of facts could lead to these equations unless the principle of general relativity were used. This is the reason that all attempts to obtain a deeper knowledge of the foundations of physics seem doomed to me unless the basic concepts are in accordance with general relativity from the beginning. This situation makes it difficult to use our empirical knowledge, however comprehensive, in looking for the fundamental concepts and relations of physics, and it forces us to apply free speculation to a much greater extent than is presently assumed by most physicists. I do not see any reason to assume that the heuristic significance of the principle of general relativity is restricted to gravitation and that the rest of physics can be dealt with separately on the basis of special relativity, with the hope that later on the whole may be fitted consistently into a general relativistic scheme. I do not think that such an attitude, although historically understandable, can be objectively justified. The comparative smallness of what we know today as gravitational effects is not a conclusive reason for ignoring the principle of general relativity in theoretical investigations of a fundamental character. In other words, I do not believe that it is justifiable to ask: What would physics look like without gravitation?

The second point we must note is that the equations of gravitation are ten differential equations for the ten components of the symmetrical tensor gik. In the case of a non-generalized relativity theory, a system is ordinarily not overdetermined if the number of equations is equal to the number of unknown functions. The manifold of solutions is such that within the general solution a certain number of functions of three variables can be chosen arbitrarily. For a general relativistic theory this cannot be expected as a matter of course. Free choice with respect to the coordinate system implies that out of the ten functions of a solution, or components of the field, four can be made to assume prescribed values by a suitable choice of the coordinate system. In other words, the principle of general relativity implies that the number of functions to be determined by differential equations is not ten but 10 − 4 = 6. For these six functions only six independent differential equations may be postulated. Only six out of the ten differential equations of the gravitational field ought to be independent of each other, while the remaining four must be connected to those six by means of four relations (identities). Indeed there exist among the left-hand sides, Rik, of the ten gravitational equations four identities, ‘Bianchi's identities’, which assure their ‘compatibility’.
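In modern notation (again, not the notation of the text itself) the four identities in question are the contracted Bianchi identities,

```latex
\nabla_i \left( R^{ik} - \tfrac{1}{2}\, g^{ik} R \right) = 0, \qquad k = 1, \dots, 4,
```

one identity for each value of the free index k: four relations among the ten equations, matching the counting 10 − 4 = 6 made above.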

In a case like this, when the number of field variables is equal to the number of differential equations, compatibility is always assured if the equations can be obtained from a variational principle. This is unquestionably the case for the gravitational equations.

However, the ten differential equations cannot be entirely replaced by six. The system of equations is indeed ‘overdetermined’, but due to the existence of the identities it is overdetermined in such a way that its compatibility is not lost, i.e., the manifold of solutions is not critically restricted. The fact that the equations of gravitation imply the law of motion for the masses is intimately connected with this (permissible) overdetermination.

After this preparation it is now easy to understand the nature of the present investigation without entering into the details of its mathematics. The problem is to set up a relativistic theory for the total field. The most important clue to its solution is that there already exists the solution for the special case of the pure gravitational field. The theory we are looking for must therefore be a generalization of the theory of the gravitational field. The first question is: What is the natural generalization of the symmetrical tensor field?

This question cannot be answered by itself, but only in connection with the other question: What generalization of the field is going to provide the most natural theoretical system? The answer on which the theory under discussion is based is that the symmetrical tensor field must be replaced by a non-symmetrical one. This means that the condition gik = gki for the field components must be dropped. In that case the field has sixteen instead of ten independent components.
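The component count is simple arithmetic: a 4 × 4 array gik has 16 entries, of which the symmetry condition gik = gki leaves only 10 independent; dropping the condition restores all 16, which split into a symmetric and an antisymmetric part:

```latex
\underbrace{\frac{4 \cdot 5}{2}}_{\text{symmetric: } 10}
\;+\;
\underbrace{\frac{4 \cdot 3}{2}}_{\text{antisymmetric: } 6}
\;=\; 16 .
```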

There remains the task of setting up the relativistic differential equations for a non-symmetrical tensor field. In the attempt to solve this problem one meets with a difficulty that does not arise in the case of the symmetrical field. The principle of general relativity does not suffice to determine completely the field equations, mainly because the transformation law of the symmetrical part of the field alone does not involve the components of the antisymmetrical part, or vice versa. Probably this is the reason that this kind of generalization of the field has hardly ever been tried before. The combination of the two parts of the field can only be shown to be a natural procedure if in the formalism of the theory only the total field plays a role, and not the symmetrical and antisymmetrical parts separately.

It turned out that this requirement can indeed be satisfied in a natural way. Nonetheless, even this requirement, together with the principle of general relativity, is still not sufficient to determine uniquely the field equations. Let us remember that the system of equations must satisfy a further condition: the equations must be compatible. It has been mentioned above that this condition is satisfied if the equations can be derived from a variational principle.

This has in fact been achieved, although not in so natural a way as in the case of the symmetrical field. It was disturbing to find that it can be achieved in two different ways. These variational principles furnished two systems of equations, let us denote them by E1 and E2, which were different from each other (although only slightly so), each of them exhibiting specific imperfections. Consequently even the condition of compatibility was insufficient to determine the system of equations uniquely.

It was, in fact, the formal defects of the systems E1 and E2 that indicated a possible way out. There exists a third system of equations, E3, which is free of the formal defects of the systems E1 and E2 and represents a combination of them in the sense that every solution of E3 is a solution of E1 as well as of E2. This suggests that E3 may be the system for which we have been looking. Why not postulate E3, then, as the system of equations? Such a procedure is not justified without further analysis, since the compatibility of E1 and that of E2 does not imply compatibility of the stronger system E3, where the number of equations exceeds the number of field components by four.

An independent consideration shows that irrespective of the question of compatibility the stronger system, E3, is the only really natural generalization of the equations of gravitation.

It seems, nonetheless, that E3 is not a compatible system in the same sense as are the systems E1 and E2, whose compatibility is assured by a sufficient number of identities, which means that every field that satisfies the equations for a definite value of the time has a continuous extension representing a solution in four-dimensional space. The system E3, however, is not extensible in the same way. Using the language of classical mechanics, we might say: In the case of the system E3 the ‘initial condition’ cannot be freely chosen. What really matters is the answer to the question: Is the manifold of solutions for the system E3 as extensive as must be required for a physical theory? This purely mathematical problem is as yet unsolved.

The skeptic will say: ‘It may be true that this system of equations is reasonable from a logical standpoint. However, this does not prove that it corresponds to nature.’ You are right, dear skeptic. Experience alone can decide on truth. Yet we have achieved something if we have succeeded in formulating a meaningful and precise question. Affirmation or refutation will not be easy, in spite of an abundance of known empirical facts. The derivation, from the equations, of conclusions that can be confronted with experience will require painstaking efforts and probably new mathematical methods.

Schrödinger's mathematical description of electron waves found immediate acceptance. The mathematical description matched what scientists had learned about electrons by observing them and their effects. In 1925, a year before Schrödinger published his results, German-British physicist Max Born and German physicist Werner Heisenberg developed a mathematical system called matrix mechanics. Matrix mechanics also succeeded in describing the structure of the atom, but it was entirely abstract: it gave no picture of the atom that physicists could verify observationally. Schrödinger's vindication of de Broglie's idea of electron waves quickly eclipsed matrix mechanics, though physicists later showed that wave mechanics is mathematically equivalent to matrix mechanics.

To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus made possible were the formulation of Newton's laws of motion and the theory of electromagnetism.
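The central idea of calculus, the rate of change of a continuously varying quantity, can be sketched numerically. The falling-body formula below (s = 4.9 t²) is an illustrative assumption, not an example taken from the text.

```python
# A minimal numerical sketch: the velocity of a moving body is the rate
# of change of its position, approximated here by a difference quotient
# with a small step h.

def position(t):
    """Position (metres) of a body falling for t seconds: s = 4.9 t^2."""
    return 4.9 * t * t

def velocity(t, h=1e-6):
    """Approximate the derivative ds/dt by a symmetric difference quotient."""
    return (position(t + h) - position(t - h)) / (2 * h)

print(velocity(1.0))  # close to the exact derivative, 9.8 m/s
```

Shrinking h further makes the quotient approach the exact derivative, which is precisely the limiting process that Newton and Leibniz formalized.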

The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.

The new math is new only in that the material is introduced at a much lower level than heretofore. Thus geometry, which was and is commonly taught in the second year of high school, is now frequently introduced, in an elementary fashion, in the fourth grade; in fact, naming and recognition of the common geometric figures, the circle and the square, occurs in kindergarten. At an early stage, numbers are identified with points on a line, and the identification is used to introduce, much earlier than in the traditional curriculum, negative numbers and the arithmetic processes involving them.

The elements of set theory constitute the most basic and perhaps the most important topic of the new math. Even a kindergarten child can understand, without formal definition, the meaning of a set of red blocks, the set of fingers on the left hand, and the set of the child's ears and eyes. The technical word set is merely a synonym for many common words that designate an aggregate of elements. The child can understand that the set of fingers on the left hand and the set of fingers on the right hand match, that is, the elements, fingers, can be put into a one-to-one correspondence. The set of fingers on the left hand and the set of the child's ears and eyes do not match. Some concepts that are developed by this method are counting, equality of number, more than, and less than. The ideas of union and intersection of sets and the complement of a set can be similarly developed without formal definition in the early grades. The principles and formalism of set theory are extended as the child advances; upon graduation from high school, the student's knowledge is quite comprehensive.
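The set ideas just described can be sketched with Python's built-in set type; the element names below are illustrative.

```python
# Matching (one-to-one correspondence), union, intersection and complement,
# the set-theory notions described above, using Python's built-in sets.
left_hand = {"thumb-L", "index-L", "middle-L", "ring-L", "little-L"}
right_hand = {"thumb-R", "index-R", "middle-R", "ring-R", "little-R"}
ears_and_eyes = {"left ear", "right ear", "left eye", "right eye"}

# Two sets 'match' when their elements can be put into one-to-one
# correspondence, i.e. when they contain the same number of elements.
print(len(left_hand) == len(right_hand))     # True
print(len(left_hand) == len(ears_and_eyes))  # False

# Union, intersection and (relative) complement of sets:
a, b = {1, 2, 3}, {3, 4}
print(a | b)  # union: {1, 2, 3, 4}
print(a & b)  # intersection: {3}
print(a - b)  # complement of b within a: {1, 2}
```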

The amount of new math and the particular topics taught vary from school to school. In addition to set theory and intuitive geometry, the material is usually chosen from the following topics: a development of the number systems, including methods of numeration, binary and other bases of notation, and modular arithmetic; measurement, with attention to accuracy and precision, and error study; studies of algebraic systems, including linear algebra, modern algebra, vectors, and matrices, with an axiomatic approach; logic, including truth tables, the nature of proof, Venn or Euler diagrams, relations, functions, and general axiomatics; probability and statistics; linear programming; computer programming and language; and analytic geometry and calculus. Some schools present differential equations, topology, and real and complex analysis.
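Two of the topics listed above, bases of notation and modular arithmetic, can be sketched with plain Python; the helper `to_base` is an illustrative function, not a library routine.

```python
# Positional notation in other bases, and modular ('clock') arithmetic.

def to_base(n, b):
    """Digits of the non-negative integer n written in base b,
    most significant digit first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

print(to_base(13, 2))   # [1, 1, 0, 1]  since 8 + 4 + 0 + 1 = 13
print(int("1101", 2))   # 13, converting back from binary notation
print(to_base(13, 3))   # [1, 1, 1]     since 9 + 3 + 1 = 13

# Modular arithmetic: hours on a 12-hour clock wrap around.
print((9 + 6) % 12)     # 3, i.e. six hours after nine o'clock
```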

Cosmology is the study of the general nature of the universe in space and in time: what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as comprised in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.

The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds, a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai, discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.

To illustrate the differences between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions, that is, the geometry of surfaces. In our schoolbooks this is known as ‘plane geometry’, because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative’, which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a ‘straight line’, equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic’.

Considering a surface curved like a saddle, we find that, given a ‘straight’ line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel. The possible parallels to the line fall within certain limits, indicated by the intersecting lines.

As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. But on a saddle surface both the circumference and the area of a circle increase at faster rates than on a flat surface with increasing diameter.

After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature, that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
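The contrast between the three kinds of surface can be illustrated numerically with the standard constant-curvature formulas (assumed here, not derived in the text): on a flat plane a circle of geodesic radius r has circumference 2πr; on a sphere of radius R it has circumference 2πR·sin(r/R), which is smaller; on a saddle-like surface of constant negative curvature it has circumference 2πR·sinh(r/R), which is larger.

```python
import math

# Circumference of a geodesic circle of radius r on surfaces of constant
# curvature, with curvature scale R: saddle > flat > sphere for every r > 0.
R = 1.0
for r in (0.25, 0.5, 1.0):
    flat = 2 * math.pi * r
    sphere = 2 * math.pi * R * math.sin(r / R)
    saddle = 2 * math.pi * R * math.sinh(r / R)
    print(f"r={r}: saddle {saddle:.4f} > flat {flat:.4f} > sphere {sphere:.4f}")
```

Measuring which way the circumference deviates from 2πr is exactly the kind of test, described in the next paragraph, by which creatures confined to a surface could determine its curvature.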

Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat’, as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (per Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space, we should be able, by studying the geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he proceeded to apply Riemann's formulas to test his idea.

Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum’, by means of the speed of light as a link between the time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √−1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
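Written out, the device just described takes the time coordinate to be x4 = ict, so that the interval between neighbouring space-time points assumes a formally Euclidean appearance:

```latex
x_4 = i c t, \qquad
ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2
     = dx_1^2 + dx_2^2 + dx_3^2 - c^2\, dt^2 .
```

The factor c converts seconds into the same units as the space coordinates, and the factor i supplies the minus sign that distinguishes time from space.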

In his special theory of relativity Einstein made the geometry of the space-time continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of forces that impel the earth to deviate from straight-line motion and to move in a circle around the sun’, Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood’.

The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line’. Einstein declared, in effect: ‘The world line of the earth is a geodesic trajectory in the curved four-dimensional space around the sun’. In other words, the . . . earth's ‘world line’ . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . .

Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by its explanation of the perturbations in the motion of Mercury at its closest approach to the sun and by the observed deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out’ their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.

But if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? And since we cannot consider space alone, how is this space curvature related to time?

Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun) and to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).

Einstein's new force, called ‘cosmic repulsion’, allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as Einstein's spherical universe, gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.

The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time. Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.

The year 1922 marked a major turning point in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.

Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: ‘The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations.’ Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he ever made in his entire life.

Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He compiled the distances of a number of far galaxies whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red-shift as the Doppler effect: the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red-shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart. Thus Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom’. (Modern physicists would prefer the term ‘primeval nucleus’.) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we now know it.
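The linear relation Hubble found between red-shift and distance can be sketched numerically. The constant of proportionality below (now called the Hubble constant) and its modern value are illustrative assumptions, not figures given in the text:

```python
# Hubble's relation: recession velocity grows in direct proportion to distance.
# H0 is an assumed constant of proportionality (a modern illustrative value,
# in km/s per megaparsec -- not a figure from the article).
H0 = 70.0
C = 299_792.458  # speed of light in km/s

def recession_velocity(distance_mpc):
    """Velocity of a galaxy receding at the given distance (km/s)."""
    return H0 * distance_mpc

def redshift(distance_mpc):
    """Fractional lengthening of wavelengths (non-relativistic Doppler)."""
    return recession_velocity(distance_mpc) / C

# A galaxy twice as far shows twice the shift -- the proportionality in the text.
```

The proportionality means the ratio of shift to distance is the same for every galaxy, which is exactly what made the distance-faintness comparison possible.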

Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks! Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.

Friedman's solution of Einstein's cosmological equation permits two kinds of universe. We can call one the ‘pulsating’ universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million times denser than water; that it will then begin to expand again, and so on through the cycle ad infinitum. The other model is a ‘hyperbolic’ one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.

The question whether our universe is ‘pulsating’ or ‘hyperbolic’ should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second (the ‘escape velocity’), the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, . . . and so on.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear into space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth) we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
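The ‘seven miles per second’ figure follows from the standard escape-velocity formula, v = √(2GM/R). A quick check, using modern values for the earth's mass and radius (assumptions for illustration, not figures from the text):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the earth, kg (modern value, assumed)
R_EARTH = 6.371e6   # mean radius of the earth, m (modern value, assumed)
MILE = 1609.344     # metres per mile

def escape_velocity(mass_kg, radius_m):
    """Minimum launch speed needed to escape a body's gravity, in m/s."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
# v comes out near 11,200 m/s, i.e. roughly seven miles per second as stated.
```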

Thus we may conclude that our universe corresponds to the ‘hyperbolic’ model, so that its present expansion will never stop. We must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.

Is the universe finite or infinite? This resolves itself into the question: Is the curvature of space positive or negative, closed like that of a sphere or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius.

We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface; and that on a positively curved surface the relative rate of increase is slower. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature, closed and finite.
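The test described above can be sketched as a toy calculation: estimate how galaxy counts grow with radius and compare the growth exponent with the Euclidean value of 3. The data below are synthetic and purely illustrative, not real survey counts:

```python
import math

def curvature_from_counts(radii, counts):
    """Infer the sign of spatial curvature from galaxy counts N(r).

    Estimates the power-law exponent from the endpoints of a log-log plot;
    an exponent above 3 suggests negative curvature (open universe),
    below 3 positive curvature (closed universe).
    """
    slope = (math.log(counts[-1]) - math.log(counts[0])) / \
            (math.log(radii[-1]) - math.log(radii[0]))
    if slope > 3:
        return "negative curvature: open and infinite"
    if slope < 3:
        return "positive curvature: closed and finite"
    return "flat (Euclidean)"

radii = [1.0, 2.0, 4.0, 8.0]
# Synthetic counts growing slightly faster than r^3 (illustrative only).
open_counts = [r ** 3.2 for r in radii]
# Synthetic counts growing slightly slower than r^3.
closed_counts = [r ** 2.8 for r in radii]
```

In practice, as the article goes on to explain, the hard part is measuring the radii, which rest on the assumed brightness of the galaxies.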

Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system, a small universe only a few billion light-years in radius.

We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Nevertheless, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past: 500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.

When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to even farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. But if we find that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.

There is another line of reasoning which supports the side of infinity. Our universe appears to be hyperbolic and ever-expanding, and mathematical solutions of the fundamental cosmological equations indicate that such a universe is open and infinite.

We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: Is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. However, in 1951 a group at the University of Cambridge, whose chief spokesman has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. However, the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it . . . Let us then summarize the evolutionary theory.

We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E=mc², which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Conversely, energy can be translated into mass by dividing the energy quantity by c². Thus, we can speak of the ‘mass density’ of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. But in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the expansion radius: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, or an eightfold decrease in density.
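The two scaling laws in this paragraph are easy to check numerically. A minimal sketch (the starting densities are arbitrary illustrative numbers):

```python
def matter_density(rho0, scale):
    """Matter thins out as the third power of the expansion radius."""
    return rho0 / scale ** 3

def radiation_density(rho0, scale):
    """Radiant energy thins out as the fourth power of the radius."""
    return rho0 / scale ** 4

# Doubling the radius: matter drops to one eighth, radiation to one sixteenth,
# so radiation inevitably loses ground to matter as the universe expands.
m_ratio = matter_density(1.0, 2.0) / matter_density(1.0, 1.0)        # 1/8
r_ratio = radiation_density(1.0, 2.0) / radiation_density(1.0, 1.0)  # 1/16
```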

Assuming that the universe at the beginning was ruled by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.
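The three figures quoted above are consistent with a temperature falling as the inverse square root of the age, T ≈ 1.5×10¹⁰/√t (T in degrees, t in seconds). Both the inverse-square-root form and the constant are assumptions used here to check the article's numbers; neither is stated in the text itself:

```python
import math

YEAR = 3.156e7  # seconds per year (approximate)

def temperature(age_seconds):
    """Radiation-era temperature, assuming T = 1.5e10 / sqrt(t).

    The functional form and its constant are assumptions for this check;
    they are not given in the article.
    """
    return 1.5e10 / math.sqrt(age_seconds)

t1 = temperature(3600)             # one hour old: about 250 million degrees
t2 = temperature(200_000 * YEAR)   # 200,000 years old: about 6,000 degrees
t3 = temperature(250e6 * YEAR)     # 250 million years: well below freezing
```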

This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to radiant energy (i.e., light), matter must have been spread uniformly through space in the form of thin gas. But as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls’, the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans's mathematical formula for the process to the gas filling the universe at that time, we find that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘protogalaxies’: cold, dark and chaotic. However, their gas soon condensed into stars and formed the galaxies as we see them now.

A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about thirty minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.

To many, the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. However, consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million million microseconds later, the site is still ‘hot’ with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
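The time-ratio claim is simple arithmetic to verify (the calendar constants below are approximations):

```python
SECONDS_PER_YEAR = 365.25 * 86_400            # approximate
MICROSECONDS_PER_YEAR = SECONDS_PER_YEAR * 1e6

# Ratio of three years to one microsecond:
bomb_ratio = 3 * MICROSECONDS_PER_YEAR         # about 9.5e13
# Ratio of five billion years to half an hour (1,800 seconds):
cosmos_ratio = 5e9 * SECONDS_PER_YEAR / 1800   # about 8.8e13

# Both ratios are about 10^14 -- "100 million million" -- as the text says.
```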

The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. I hold to the opinion that some of them were built by capture of neutrons. However, since the absence of any stable nucleus of atomic weight five makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, I would agree that the lion's share of the heavy elements may have been formed later in the hot interiors of stars.

All the theories of the origin, age, extent, composition and nature of the universe are becoming more subject to test by new instruments and new techniques . . . Nevertheless, we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies diminish in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.

In addition, certain branches of physical science focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy carried in electromagnetic waves. These include radio waves, light rays, and X-rays: forms of energy that are closely related and that all obey the same set of rules. Chemistry is the study of the composition of matter and the way different substances interact, subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to produce by synthesizing from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with fewer harmful side effects.

The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions they use to build themselves up. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology, one of the fastest-growing sciences today.

Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.

The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of the microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.

Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.

While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.

Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life: the fact that most living things maintain a steady internal state even when the environment around them constantly changes.

Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.

As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.

The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.

Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share, as well as those that are the products of a particular culture, taught by others and handed down from generation to generation.

The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.

In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.

Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.

In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine. Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.

During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. With the invention of writing about 6,000 years ago, however, a new and much more flexible system of recording knowledge appeared.

The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.

Because clay is durable, many of these ancient tablets still survive. They show that, when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.

Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians, a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.

For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. Yet in ancient Greece, often recognized as the birthplace of Western science, a new scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.

Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.

Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit, not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter was made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.

As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens (the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle), students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.

In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within one percent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.

By the 1st century BC, Roman power was growing and Greek influence had begun to wane. During this period, the Egyptian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science made little progress during the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.

For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300's. Elsewhere, however, particularly in China and the Arab world, much more significant progress in the sciences was made.

Chinese science developed in isolation from Europe and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of π (pi) to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about AD 940.

The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in the number 30. Al-Khwarizmi also wrote on algebra (a word derived from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
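
The place-value idea described here is easy to check mechanically. A minimal sketch (the helper name `digit_value` is ours, purely for illustration):

```python
# Place value: the same numeral contributes a different amount depending on
# its position. The 3 in 300 is worth ten times the 3 in 30.
def digit_value(number, digit):
    """Value contributed by each occurrence of `digit` in `number`."""
    s = str(number)
    return [int(d) * 10 ** (len(s) - 1 - i)
            for i, d in enumerate(s) if d == str(digit)]

print(digit_value(300, 3))  # [300]
print(digit_value(30, 3))   # [30]
```

Roman numerals have no such positional rule (and no zero), which is why arithmetic with them is so cumbersome by comparison.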

In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and to test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used: alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous, the Egyptian physicist Alhazen, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that vision occurs when light rays enter the eye from outside.

In Europe, historians often attribute the rebirth of science to a political event: the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. With the invention of the movable-type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.

The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. However, in 1543 two books were published that had a profound impact on scientific progress. One was De Humani Corporis Fabrica (On the Structure of the Human Body, seven volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.

The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as proposed by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600's, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban, and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus was a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.

In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.

These observations of Venus helped to convince Galileo that Copernicus's Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (a tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.

Nicolaus Copernicus (1473-1543) developed the first heliocentric theory of the universe in the modern era, presented in De Revolutionibus Orbium Coelestium, published in the year of Copernicus's death. The system is entirely mathematical, in the sense of predicting the observed positions of celestial bodies on the basis of an underlying geometry, without exploring the mechanics of celestial motion. Its mathematical and scientific superiority over the Ptolemaic system was not as direct as popular history suggests: Copernicus's system adhered to circular planetary motion and let the planets run on forty-eight epicycles and eccentrics. It was not until the work of Kepler and Galileo that the system became markedly simpler than Ptolemaic astronomy.

The publication of Nicolaus Copernicus's De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) in 1543 is traditionally considered the inauguration of the scientific revolution. Ironically, Copernicus had no intention of introducing radical ideas into cosmology. His aim was only to restore the purity of ancient Greek astronomy by eliminating novelties introduced by Ptolemy. With such an aim in mind he modelled his own book, which would turn astronomy upside down, on Ptolemy's Almagest. At the core of the Copernican system, as with that of Aristarchus before him, is the concept of the stationary Sun at the centre of the universe, and the revolution of the planets, Earth included, around the Sun. The Earth was ascribed, in addition to an annual revolution around the Sun, a daily rotation around its axis.

Copernicus's greatest achievement is his legacy. By introducing mathematical reasoning into cosmology, he dealt a severe blow to Aristotelian commonsense physics. His concept of an Earth in motion launched the notion of the Earth as a planet. His explanation that he had been unable to detect stellar parallax because of the enormous distance of the sphere of the fixed stars opened the way for future speculation about an infinite universe. Nevertheless, Copernicus still clung to many traditional features of Aristotelian cosmology. He continued to advocate the entrenched view of the universe as a closed world and to see the motion of the planets as uniform and circular. Thus, in evaluating Copernicus's legacy, it should be noted that he set the stage for far more daring speculations than he himself could make.

Among those who built on that legacy was the German astronomer Johannes Kepler, who replaced circular orbits with his three laws of planetary motion. The heavy metaphysical underpinning of Kepler's laws, combined with an obscure style and a demanding mathematics, caused most contemporaries to ignore his discoveries. Even his Italian contemporary Galileo Galilei, who corresponded with Kepler and possessed his books, never referred to the three laws. Instead, Galileo provided the two important elements missing from Kepler's work: a new science of dynamics that could be employed in an explanation of planetary motion, and a staggering new body of astronomical observations. The observations were made possible by the invention of the telescope in Holland c. 1608 and by Galileo's ability to improve on this instrument without ever having seen the original. Thus equipped, he turned his telescope skyward and saw some spectacular sights.

The results of his discoveries were immediately published in the Sidereus Nuncius (The Starry Messenger) of 1610. Galileo observed that the Moon was very similar to the Earth, with mountains, valleys, and oceans, and not at all the perfect, smooth, spherical body it was claimed to be. He also discovered four moons orbiting Jupiter. As for the Milky Way, instead of being a stream of light, it was in fact a vast aggregate of stars. Later observations resulted in the discovery of sunspots, the phases of Venus, and that strange phenomenon that would later be designated the rings of Saturn.

Having announced these sensational astronomical discoveries, which reinforced his conviction of the reality of the heliocentric theory, Galileo resumed his earlier studies of motion. He now attempted to construct the comprehensive new science of mechanics necessary in a Copernican world, and the results of his labours were published in Italian in two epoch-making books: Dialogue Concerning the Two Chief World Systems (1632) and Discourses and Mathematical Demonstrations Concerning the Two New Sciences (1638). His studies of projectiles and free-falling bodies brought him very close to the full formulation of the laws of inertia and acceleration (the first two laws of Isaac Newton). Galileo's legacy includes both the modern notion of ‘laws of nature’ and the idea of mathematics as nature's true language. He contributed to the mathematization of nature and the geometrization of space, as well as to the mechanical philosophy that would dominate the 17th and 18th centuries. Perhaps most important, it is largely due to Galileo that experiments and observations serve as the cornerstone of scientific reasoning.

Today, Galileo is remembered equally well for his conflict with the Roman Catholic church. His uncompromising advocacy of Copernicanism after 1610 was responsible, in part, for the placement of Copernicus's De Revolutionibus on the Index of Forbidden Books in 1616. At the same time, Galileo was warned not to teach or defend Copernicanism in public. The election of Galileo's friend Maffeo Barberini as Pope Urban VIII in 1624 filled Galileo with the hope that such a verdict could be revoked. With perhaps some unwarranted optimism, Galileo set to work to complete his Dialogue (1632). However, Galileo underestimated the power of the enemies he had made during the previous two decades, particularly some Jesuits who had been the target of his acerbic tongue. The outcome was that Galileo was summoned to Rome and there forced to abjure, on his knees, the views he had expressed in his book. Ever since, Galileo has been portrayed as a victim of a repressive church and a martyr in the cause of freedom of thought; as such, he has become a powerful symbol.

Despite his passionate advocacy of Copernicanism and his fundamental work in mechanics, Galileo continued to accept the age-old views that planetary orbits were circular and the cosmos an enclosed world. These beliefs, as well as a reluctance to apply mathematics as rigorously to astronomy as he had applied it to terrestrial mechanics, prevented him from arriving at the correct law of inertia. Thus, it remained for Isaac Newton to unite heaven and Earth in his immense intellectual achievement, the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), which was published in 1687. The first book of the Principia contained Newton's three laws of motion. The first expounds the law of inertia: every body persists in a state of rest or uniform motion in a straight line unless compelled to change that state by an impressed force. The second is the law of acceleration, according to which the change of motion of a body is proportional to the force acting upon it and takes place in the direction of the straight line along which that force is impressed. The third, and most original, law ascribes to every action an opposite and equal reaction. These laws governing terrestrial motion were extended to include celestial motion in book three of the Principia, where Newton formulated his most famous law, the law of gravitation: every body in the universe attracts every other body with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
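
Newton's verbal statement of the law of gravitation translates directly into the formula F = G·m₁·m₂/r². As an illustration, the sketch below plugs in modern measured values for the Earth and Moon; none of these numbers appear in the Principia itself:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2.
# Illustrative modern values (not from the Principia):
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2
print(f"{force:.3e} N")  # roughly 2e20 N
```

Proportional to the product of the masses, inversely proportional to the square of the distance: the expression mirrors the sentence term for term.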

The Principia is deservedly considered one of the greatest scientific masterpieces of all time. In 1704, Newton published his second great work, the Opticks, in which he formulated his corpuscular theory of light and his theory of colours. In later editions Newton appended a series of ‘queries’ concerning various related topics in natural philosophy. These speculative, and sometimes metaphysical, statements on such issues as light, heat, ether, and matter proved most productive during the 18th century, when the book and the experimental method it propagated became immensely popular.

The 17th-century French scientist and mathematician René Descartes was also one of the most influential thinkers in Western philosophy. Descartes stressed the importance of skepticism in thought and proposed the idea that existence had a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. This passage from Discourse on Method (first published in his Philosophical Essays in 1637) contains a summary of his thesis, which includes the celebrated phrase ‘I think, therefore I am.’

Then examining attentively what I was, and seeing that I could pretend that I had no body and that there was no world or place that I [was] in, but that I could not, for all that, pretend that I did not exist, and that, on the contrary, from the very fact that I thought of doubting the truth of other things, it followed very evidently and very certainly that I existed; while, on the other hand, if I had only ceased to think, although all the rest of what I had ever imagined had been true, I would have had no reason to believe that I existed; I thereby concluded that I was a substance, of which the whole essence or nature consists in thinking, and which, in order to exist, needs no place and depends on no material thing; so that this ‘I’, which is to say, the mind, by which I am what I am, is entirely distinct from the body, and is even easier to know than the body, and moreover, even if the body were not, it would not cease to be all that it is.

After this, I considered in general what is needed for a proposition to be true and certain; for, since I had just found one that I knew to be so, I thought that I ought also to know what this certainty consists in. Having marked and noted that there is nothing at all in this, I think, therefore I am, which assures me that I am speaking the truth, except that I see very clearly that in order to think one must exist, I judged that I could take it to be a general rule that the things we conceive very clearly and very distinctly are all true, but that there is nevertheless some difficulty in being able to recognize for certain which are the things we conceive distinctly.

Following this, reflecting on the fact that I had doubts, and that consequently my being was not wholly perfect, for I saw clearly that it was a greater perfection to know than to doubt, I decided to inquire from what place I had learned to think of something more perfect than myself; and I clearly recognized that this must have been from some nature that was in fact more perfect. As for the notions I had of several other things outside myself, such as the sky, the earth, light, heat and a thousand others, I had not the same concern to know their source, because, seeing nothing in them that seemed to make them superior to me, I could believe that, if they were true, they were dependencies of my nature, in as much as it had some perfection; and, if they were not, that I held them from nothing, that is to say that they were in me because of an imperfection in my nature. Nevertheless, I could not make the same judgement concerning the idea of a being more perfect than myself; for to hold it from nothing was something manifestly impossible; and because it is no less contradictory that the more perfect should proceed from and depend on the less perfect, than it is that something should emerge out of nothing, I could not hold it from myself; with the result that it remained that it must have been put into me by a nature that was truly more perfect than mine and which even had in it all the perfections of which I could have any idea, which is to say, in a word, which was God.
To which I added that, since I knew some perfections that I did not have, I was not the only being that existed (I will freely use here, with your permission, the terms of the School), but that there must be some other, more perfect being, upon whom I depended, and from whom I had acquired all I had; for, if I had been alone and independent of every other, so as to have had from myself this small portion of perfection that I had by participation in the perfection of God, I could have given myself, by the same reason, all the remainder of perfection that I knew myself to lack, and thus been myself infinite, eternal, immutable, omniscient, all-powerful, and finally to have had all the perfections that I could observe to be in God. For, following the reasoning by which I had proved the existence of God, in order to understand the nature of God as far as my own nature was capable of doing, I had only to consider, concerning all the things of which I found in myself some idea, whether it was a perfection or not to have them: and I was assured that none of those that indicated some imperfection was in him, but that all the others were. So I saw that doubt, inconstancy, sadness and similar things could not be in him, seeing that I myself would have been very pleased to be free from them. Then, further, I had ideas of many sensible and bodily things; for even supposing that I was dreaming, and that everything I saw or imagined was false, I could not, nevertheless, deny that the ideas were really in my thoughts.
However, because I had already recognized in myself very clearly that intelligent nature is distinct from the corporeal, considering that all composition is evidence of dependency, and that dependency is manifestly a defect, I thence judged that it could not be a perfection in God to be composed of these two natures, and that, consequently, he was not so composed, but that, if there were any bodies in the world or any intelligence or other natures that were not wholly perfect, their existence must depend on his power, in such a way that they could not subsist without him for a single instant.

I set out after that to seek other truths; and turning to the object of the geometers [geometry], which I conceived as a continuous body, or a space extended indefinitely in length, width and height or depth, divisible into various parts, which could have various figures and sizes and be moved or transposed in all sorts of ways (for the geometers take all that to be in the object of their study), I went through some of their simplest proofs. Having observed that the great certainty that everyone attributes to them is based only on the fact that they are clearly conceived according to the rule I spoke of earlier, I noticed also that they had nothing at all in them that might assure me of the existence of their object. Thus, for example, I very well perceived that, supposing a triangle to be given, its three angles must be equal to two right angles, but I saw nothing, for all that, which assured me that any such triangle existed in the world; whereas, returning to the examination of the idea I had of a perfect Being, I found that existence was comprised in the idea in the same way that the equality of the three angles of a triangle to two right angles is comprised in the idea of a triangle or, as in the idea of a sphere, the fact that all its parts are equidistant from its centre, or even more obviously so; and that, consequently, it is at least as certain that God, who is this perfect Being, is, or exists, as any geometric demonstration can be.

The impact of the Newtonian accomplishment was enormous. Newton's two great books resulted in the establishment of two traditions that, though often mutually exclusive, nevertheless permeated every area of science. The first was the mathematical and reductionist tradition of the Principia, which, like René Descartes's mechanical philosophy, propagated a rational, well-regulated image of the universe. The second was the experimental tradition of the Opticks, in a measure less demanding than the mathematical tradition and, owing to the speculative and suggestive queries appended to the Opticks, highly applicable to chemistry, biology, and the other new scientific disciplines that began to flourish in the 18th century. This is not to imply that everyone in the scientific establishment was, or would be, a Newtonian. Newtonianism had its share of detractors. Instead, the Newtonian achievement was so great, and its applicability to other disciplines so strong, that although Newtonian science could be argued against, it could not be ignored. In fact, in the physical sciences an initial reaction against universal gravitation occurred. For many, the concept of action at a distance seemed to hark back to those occult qualities with which the mechanical philosophy of the 17th century had done away. By the second half of the 18th century, however, universal gravitation would be proved correct, thanks to the work of Leonhard Euler, A. C. Clairaut, and Pierre-Simon de Laplace, the last of whom announced the stability of the solar system in his masterpiece Celestial Mechanics (1799-1825).

Newton's influence was not confined to the domain of the natural sciences. The philosophes of the 18th-century Enlightenment sought to apply scientific methods to the study of human society. To them, the empiricist philosopher John Locke was the first person to attempt this. They believed that in his Essay Concerning Human Understanding (1690) Locke did for the human mind what Newton had done for the physical world. Although Locke's psychology and epistemology were to come under increasing attack as the 18th century advanced, other thinkers such as Adam Smith, David Hume, and Abbé de Condillac would aspire to become the Newtons of the mind or the moral realm. These confident, optimistic men of the Enlightenment argued that there must exist universal human laws that transcend differences of human behaviour and the variety of social and cultural institutions. Labouring under such an assumption, they sought to uncover these laws and apply them to the new society that they hoped to bring about.

As the 18th century progressed, the optimism of the philosophes waned and a reaction began to set in. Its first manifestation occurred in the religious realm. The mechanistic interpretation of the world, shared by Newton and Descartes, had, in the hands of the philosophes, led to materialism and atheism. Thus, by mid-century the stage was set for a revivalist movement, which took the form of Methodism in England and Pietism in Germany. By the end of the century the romantic reaction had begun. Fuelled in part by religious revivalism, the romantics attacked the extreme rationalism of the Enlightenment, the impersonalization of the mechanistic universe, and the contemptuous attitude of ‘mathematicians’ toward imagination, emotions, and religion.

The romantic reaction, however, was not anti-scientific; its adherents rejected a specific type of science, the mathematical, not the entire enterprise. In fact, the romantic reaction, particularly in Germany, would give rise to a creative movement, the Naturphilosophie, that in turn would be crucial for the development of the biological and life sciences in the 19th century, and would nourish the metaphysical foundation necessary for the emergence of the concepts of energy, forces, and conservation.

In classical physics, external reality consisted of inert and inanimate matter moving in accordance with wholly deterministic natural laws, and wholes were constituted by collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. Nevertheless, in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.

Even though instruction at Cambridge was still dominated by the philosophy of Aristotle, some freedom of study was permitted in the student's third year. Newton immersed himself in the new mechanical philosophy of Descartes, Gassendi, and Boyle; in the new algebra and analytical geometry of Vieta, Descartes, and Wallis; and in the mechanics and Copernican astronomy of Galileo. At this stage Newton showed no great talent. His scientific genius emerged suddenly when the plague closed the university in the summer of 1665 and he had to return to Lincolnshire. There, within eighteen months, he made revolutionary advances in mathematics, optics, physics, and astronomy.

During the plague years Newton laid the foundation for elementary differential and integral calculus, several years before its independent discovery by the German philosopher and mathematician Leibniz. The ‘method of fluxions’, as he termed it, was based on his crucial insight that the integration of a function (or finding the area under its curve) is merely the inverse procedure to differentiating it (or finding the slope of the curve at any point). Taking differentiation as the basic operation, Newton produced simple analytical methods that unified a host of disparate techniques previously developed on a piecemeal basis to deal with such problems as finding areas, tangents, the lengths of curves, and their maxima and minima. Even though Newton could not fully justify his methods (rigorous logical foundations for the calculus were not developed until the 19th century), he receives the credit for developing a powerful tool of problem solving and analysis in pure mathematics and physics. Isaac Barrow, a Fellow of Trinity College and Lucasian Professor of Mathematics in the University, was so impressed by Newton's achievement that when he resigned his chair in 1669 to devote himself to theology, he recommended that the 27-year-old Newton take his place.
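
Newton's crucial insight, that integration and differentiation are inverse operations, can be illustrated numerically. The sketch below (the function names are ours, purely for illustration) integrates f(x) = x² to get a running area, then takes the slope of that area function, recovering f:

```python
# Integration and differentiation as inverse operations: differentiate the
# area under f and you get f back.
def f(x):
    return x * x

def integral(t, n=100000):
    """Area under f from 0 to t, approximated by the midpoint rule."""
    h = t / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def derivative(F, t, h=1e-4):
    """Slope of F at t, approximated by a central difference."""
    return (F(t + h) - F(t - h)) / (2 * h)

t = 2.0
print(integral(t))              # close to t**3 / 3
print(derivative(integral, t))  # close to f(t) = 4.0
```

The slope of the accumulated area at t matches f(t), which is exactly the relationship the ‘method of fluxions’ exploited.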

Newton's initial lectures as Lucasian Professor dealt with optics, including his remarkable discoveries made during the plague years. He had reached the revolutionary conclusion that white light is not a simple, homogeneous entity, as natural philosophers since Aristotle had believed. When he passed a thin beam of sunlight through a glass prism, he noted the oblong spectrum of colours (red, yellow, green, blue, violet) that formed on the wall opposite. Newton showed that the spectrum was too long to be explained by the accepted theory of the bending (or refraction) of light by dense media. The old theory said that all rays of white light striking the prism at the same angle would be equally refracted. Newton argued that white light is really a mixture of many different types of rays, that the different types of rays are refracted at different angles, and that each different type of ray is responsible for producing a given spectral colour. A so-called crucial experiment confirmed the theory. Newton selected out of the spectrum a narrow band of light of one colour. He sent it through a second prism and observed that no further elongation occurred. All the selected rays of one colour were refracted at the same angle.

These discoveries led Newton to the logical, but erroneous, conclusion that telescopes using refracting lenses could never overcome the distortions of chromatic dispersion. He therefore proposed and constructed a reflecting telescope, the first of its kind, and the prototype of the largest modern optical telescopes. In 1671 he donated an improved version to the Royal Society of London, the foremost scientific society of the day. As a consequence, he was elected a fellow of the society in 1672. Later that year Newton published his first scientific paper in the Philosophical Transactions of the society. It dealt with the new theory of light and colour and is one of the earliest examples of the short research paper.

Newton's paper was well received, but two leading natural philosophers, Robert Hooke and Christiaan Huygens, rejected Newton's naive claim that his theory was simply derived with certainty from experiments. In particular they objected to what they took to be Newton's attempt to prove by experiment alone that light consists in the motion of small particles, or corpuscles, rather than in the transmission of waves or pulses, as they both believed. Although Newton's subsequent denial of the use of hypotheses was not convincing, his ideas about scientific method won universal assent, along with his corpuscular theory, which reigned until the wave theory was revived in the early 19th century.

The debate soured Newton's relations with Hooke. Newton withdrew from public scientific discussion for about a decade after 1675, devoting himself to chemical and alchemical researches. He delayed the publication of a full account of his optical researches until after the death of Hooke in 1703. Newton's Opticks appeared the following year. It dealt with the theory of light and colour and with Newton's investigations of the colours of thin sheets, of ‘Newton's rings’, and of the phenomenon of diffraction of light. To explain some of his observations he had to graft elements of a wave theory of light onto his basically corpuscular theory.

Newton's greatest achievement was his work in physics and celestial mechanics, which culminated in the theory of universal gravitation. Even though Newton also began this research in the plague years, the story that he discovered universal gravitation in 1666 while watching an apple fall from a tree in his garden is a myth. By 1666, Newton had formulated early versions of his three laws of motion. He had also discovered the law stating the centrifugal force (or force away from the centre) of a body moving uniformly in a circular path. However, he still believed that Earth's gravity and the motions of the planets might be caused by the action of whirlpools, or vortices, of small corpuscles, as Descartes had claimed. Moreover, although he knew the law of centrifugal force, he did not have a correct understanding of the mechanics of circular motion. He thought of circular motion as the result of a balance between two forces, one centrifugal, the other centripetal (toward the centre), rather than as the result of one force, a centripetal force, which constantly deflects the body away from its inertial path in a straight line.

Newton's great insight of 1666 was to imagine that the Earth's gravity extended to the Moon, counterbalancing its centrifugal force. From his law of centrifugal force and Kepler's third law of planetary motion, Newton deduced that the centrifugal (and hence centripetal) forces of the Moon or of any planet must decrease as the inverse square of its distance from the centre of its motion. For example, if the distance is doubled, the force becomes one-fourth as much; if the distance is trebled, the force becomes one-ninth as much. This theory agreed with Newton's data to within about 11%.
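The scaling described above can be checked with a few lines of code. This is only an illustrative sketch; the function name and the unit force are invented for the example, not taken from Newton's own calculation:

```python
# Illustrative inverse-square scaling: multiplying the distance by
# `scale` divides an inverse-square force by scale squared.

def inverse_square_force(force_at_base, scale):
    """Force after the original distance is multiplied by `scale`."""
    return force_at_base / scale ** 2

base = 1.0  # arbitrary force at the original distance
print(inverse_square_force(base, 2))  # doubled distance -> 0.25 (one-fourth)
print(inverse_square_force(base, 3))  # trebled distance -> one-ninth (about 0.111)
```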

In 1679, Newton returned to his study of celestial mechanics when his adversary Hooke drew him into a discussion of the problem of orbital motion. Hooke is credited with suggesting to Newton that circular motion arises from the centripetal deflection of inertially moving bodies. Hooke further conjectured that since the planets move in ellipses with the Sun at one focus (Kepler's first law), the centripetal force drawing them to the Sun should vary as the inverse square of their distances from it. Hooke could not prove this conjecture mathematically, although he boasted that he could. Not to be shown up by his rival, Newton applied his mathematical talents to proving it. He showed that if a body obeys Kepler's second law (which states that the line joining a planet to the Sun sweeps out equal areas in equal times), then the body is being acted upon by a centripetal force. This discovery revealed for the first time the physical significance of Kepler's second law. Given this discovery, Newton succeeded in showing that a body moving in an elliptical path and attracted to one focus must indeed be drawn by a force that varies as the inverse square of the distance. Newton later set even these results aside.

In 1684 the young astronomer Edmond Halley, tired of Hooke's fruitless boasting, asked Newton whether he could prove Hooke's conjecture and to his surprise was told that Newton had solved the problem a full five years before but had since mislaid the paper. At Halley's constant urging Newton reproduced the proofs and expanded them into a paper on the laws of motion and problems of orbital mechanics. Finally Halley persuaded Newton to compose a full-length treatment of his new physics and its application to astronomy. After eighteen months of sustained effort, Newton published (1687) the Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy), or Principia, as it is universally known.

By common consent the Principia is the greatest scientific book ever written. Within the framework of an infinite, homogeneous, three-dimensional, empty space and a uniformly and eternally flowing ‘absolute’ time, Newton fully analysed the motion of bodies in resisting and nonresisting media under the action of centripetal forces. The results were applied to orbiting bodies, projectiles, pendula, and free-fall near the Earth. He further demonstrated that the planets were attracted toward the Sun by a force varying as the inverse square of the distance and generalized that all heavenly bodies mutually attract one another. By further generalization, he reached his law of universal gravitation: every piece of matter attracts every other piece with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
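The law of universal gravitation stated above can be sketched numerically. The constants below are modern measured values for the Earth-Moon pair, supplied purely for illustration; they are not figures from the text:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# The numerical constants are modern measured values (an assumption
# of this sketch, not data from the original account).

G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Attractive force (N) between masses m1 and m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

m_earth = 5.972e24   # kg
m_moon = 7.342e22    # kg
distance = 3.844e8   # mean Earth-Moon distance, m

print(f"{gravitational_force(m_earth, m_moon, distance):.2e} N")  # on the order of 2e20 N
```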

Given the law of gravitation and the laws of motion, Newton could explain a wide range of hitherto disparate phenomena such as the eccentric orbits of comets, the causes of the tides and their major variations, the precession of the Earth's axis, and the perturbation of the motion of the Moon by the gravity of the Sun. Newton's one general law of nature and one system of mechanics reduced to order most of the known problems of astronomy and terrestrial physics. The work of Galileo, Copernicus, and Kepler was united and transformed into one coherent scientific theory. The new Copernican world-picture finally had a firm physical basis.

Because Newton repeatedly used the term ‘attraction’ in the Principia, mechanical philosophers attacked him for reintroducing into science the idea that mere matter could act at a distance upon other matter. Newton replied that he had only intended to show the existence of gravitational attraction and to discover its mathematical law, not to inquire into its cause. He, no more than his critics, believed that brute matter could act at a distance. Having rejected the Cartesian vortices, he reverted in the early 1700s to the idea that some material medium, or ether, caused gravity. However, Newton's ether was no longer a Cartesian-type ether acting solely by impacts among particles. The ether had to be extremely rare so it would not obstruct the motions of the planets, and yet very elastic or springy so it could push large masses toward one another. Newton postulated that the new ether consisted of particles endowed with very powerful short-range repulsive forces. His unreconciled ideas on forces and ether deeply influenced later natural philosophers in the 18th century when they turned to the phenomena of chemistry, electricity and magnetism, and physiology.

With the publication of the Principia, Newton was recognized as the leading natural philosopher of the age, but his creative career was effectively over. After suffering a nervous breakdown in 1693, he retired from research to seek a government position in London. In 1696 he became Warden of the Royal Mint and in 1699 its Master, an extremely lucrative position. He oversaw the great English recoinage of the 1690s and pursued counterfeiters with ferocity. In 1703 he was elected president of the Royal Society and was reelected each year until his death. He was knighted (1708) by Queen Anne, the first scientist to be so honoured for his work.

As any overt appeal to metaphysics became unfashionable, the science of mechanics was increasingly regarded, says Ivor Leclerc, as ‘an autonomous science,’ and any alleged role of God as ‘deus ex machina’. At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other great French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science had revealed itself to be the fundamental science, they concluded that the hypothesis of God was unnecessary.

Pierre-Simon Laplace (1749-1827) is recognized for eliminating not only the theological component of classical physics but the ‘entire metaphysical component’ as well. The epistemology of science requires that we proceed by inductive generalisations from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena.’ What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only quantities.

The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science that was the science of nature. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical descriptions. Since the doctrine of positivism assumed that the knowledge we call physics resides only in the mathematical formalisms of physical theory, it disallowed the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

These discoveries have more potential to transform our conception of the ‘way things are’ than any previous discovery in the history of science; their implications extend well beyond the domain of the physical sciences, and the best efforts of large numbers of thoughtful people will be required to understand them.

In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.

Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.

However, the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth’s tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.

Newton’s work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began actively to apply rational thought, careful observation, and experimentation to solve a variety of problems.

Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.

By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.

With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not mean that discoveries were becoming narrower, however: from the 19th century onward, research began to uncover principles that unite the universe as a whole.

In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions, a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.

Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.

In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.

In physics, the 19th century is remembered chiefly for research into electricity and magnetism, pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1821 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment, and others that he carried out, led to the development of electric motors and generators. While Faraday’s genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's development of the electromagnetic theory of light took many years. It began with the paper ‘On Faraday's Lines of Force’ (1855–1856), in which Maxwell built on the ideas of British physicist Michael Faraday. Faraday explained that electric and magnetic effects result from lines of force that surround conductors and magnets. Maxwell drew an analogy between the behaviour of the lines of force and the flow of a liquid, deriving equations that represented electric and magnetic effects. The next step toward Maxwell’s electromagnetic theory was the publication of the paper ‘On Physical Lines of Force’ (1861–1862). Here Maxwell developed a model for the medium that could carry electric and magnetic effects. He devised a hypothetical medium that consisted of a fluid in which magnetic effects created whirlpool-like structures. These whirlpools were separated by cells created by electric effects, so the combination of magnetic and electric effects formed a honeycomb pattern.

Maxwell could explain all known effects of electromagnetism by considering how the motion of the whirlpools, or vortices, and cells could produce magnetic and electric effects. He showed that the lines of force behave like the structures in the hypothetical fluid. Maxwell went further, considering what would happen if the fluid could change density, or be elastic. The movement of a charge would set up a disturbance in an elastic medium, forming waves that would move through the medium. The speed of these waves would be equal to the ratio of the value for an electric current measured in electrostatic units to the value of the same current measured in electromagnetic units. German physicists Friedrich Kohlrausch and Wilhelm Weber had calculated this ratio and found it the same as the speed of light. Maxwell inferred that light consists of waves in the same medium that causes electric and magnetic phenomena.

Maxwell found supporting evidence for this inference in work he did on defining basic electrical and magnetic quantities in terms of mass, length, and time. In the paper ‘On the Elementary Relations of Electrical Quantities’ (1863), he wrote that the ratio of the two definitions of any quantity based on electric and magnetic forces is always equal to the velocity of light. He considered that light must consist of electromagnetic waves but first needed to prove this by abandoning the vortex analogy and developing a mathematical system. He achieved this in ‘A Dynamical Theory of the Electromagnetic Field’ (1864), in which he developed the fundamental equations that describe the electromagnetic field. These equations showed that light is propagated in two waves, one magnetic and the other electric, which vibrate perpendicular to each other and perpendicular to the direction in which they are moving (like a wave travelling along a string). Maxwell first published this solution in ‘Note on the Electromagnetic Theory of Light’ (1868) and summed up all of his work on electricity and magnetism in Treatise on Electricity and Magnetism in 1873.

The treatise also suggested that a whole family of electromagnetic radiation must exist, of which visible light was only one part. In 1888 German physicist Heinrich Hertz made the sensational discovery of radio waves, a form of electromagnetic radiation with wavelengths too long for our eyes to see, confirming Maxwell’s ideas. Unfortunately, Maxwell did not live long enough to see this vindication of his work. He also did not live to see the ether (the medium in which light waves were said to be propagated) disproved with the classic experiments of German-born American physicist Albert Michelson and American chemist Edward Morley in 1881 and 1887. Maxwell had suggested an experiment much like the Michelson-Morley experiment in the last year of his life. Although Maxwell believed the ether existed, his equations were not dependent on its existence, and so remained valid.

Maxwell's other major contribution to physics was to provide a mathematical basis for the kinetic theory of gases, which explains that gases behave as they do because they are composed of particles in constant motion. Maxwell built on the achievements of German physicist Rudolf Clausius, who in 1857 and 1858 had shown that a gas must consist of molecules in constant motion colliding with each other and with the walls of their container. Clausius developed the idea of the mean free path, which is the average distance that a molecule travels between collisions.

Maxwell's development of the kinetic theory of gases was stimulated by his success with the similar problem of Saturn's rings. It dates from 1860, when he used a statistical treatment to express the wide range of velocities (speeds and the directions of the speeds) that the molecules in a quantity of gas must inevitably possess. He arrived at a formula to express the distribution of velocities in gas molecules, relating it to temperature. He showed that gases store heat in the motion of their molecules, so the molecules in a gas will speed up as the gas's temperature increases. Maxwell then applied his theory with some success to viscosity (how much a gas resists movement), diffusion (how gas molecules move from an area of higher concentration to an area of lower concentration), and other properties of gases that depend on the nature of the molecules’ motion.
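One consequence of Maxwell's distribution, that molecules move faster in a hotter gas, can be illustrated with the standard formula for the most probable molecular speed, v_p = sqrt(2kT/m). The choice of nitrogen as the example gas and the numerical constants are assumptions of this sketch, not details from the text:

```python
# Most probable molecular speed from the Maxwell speed distribution:
# v_p = sqrt(2 * k * T / m). Nitrogen is chosen as an example gas.
import math

K_BOLTZMANN = 1.380649e-23    # Boltzmann constant, J/K
M_NITROGEN = 28 * 1.6605e-27  # approximate mass of one N2 molecule, kg

def most_probable_speed(temperature, mass=M_NITROGEN):
    """Most probable molecular speed (m/s) at `temperature` kelvin."""
    return math.sqrt(2 * K_BOLTZMANN * temperature / mass)

print(round(most_probable_speed(300)))  # roughly 420 m/s at room temperature
# Hotter gas -> faster molecules, as the kinetic theory predicts:
assert most_probable_speed(600) > most_probable_speed(300)
```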

Maxwell's kinetic theory did not fully explain heat conduction (how heat travels through a gas). Austrian physicist Ludwig Boltzmann modified Maxwell’s theory in 1868, resulting in the Maxwell-Boltzmann distribution law, showing the number of particles (n) having an energy (E) in a system of particles in thermal equilibrium. It has the form:

n = n0 exp(-E/kT),

where n0 is the number of particles having the lowest energy, ‘k’ the Boltzmann constant, and ‘T’ the thermodynamic temperature.

If the particles can only have certain fixed energies, such as the energy levels of atoms, the formula gives the number (ni) of particles having an energy (Ei) above the ground-state energy. In certain cases several distinct states may have the same energy, and the formula then becomes:

ni = gi n0 exp(-Ei/kT),

where gi is the statistical weight of the level of energy Ei, i.e., the number of states having energy Ei. The distribution of energies obtained by the formula is called a Boltzmann distribution.
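The Boltzmann formula above can be evaluated directly. The sketch below uses an arbitrary ground-state population and room temperature as example inputs; only the formula itself comes from the text:

```python
# Boltzmann distribution: n = n0 * exp(-E / (k*T)), optionally weighted
# by the statistical weight g of a degenerate level (ni = gi*n0*exp(-Ei/kT)).
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_population(n0, energy, temperature, weight=1):
    """Population of a level `energy` joules above the ground state."""
    return weight * n0 * math.exp(-energy / (K_BOLTZMANN * temperature))

n0 = 1.0e6              # ground-state population (arbitrary example value)
kT = K_BOLTZMANN * 300  # thermal energy at 300 K
# A level lying exactly kT above the ground state holds exp(-1), about 37%,
# of the ground-state population:
print(boltzmann_population(n0, kT, 300) / n0)
```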

Both Maxwell's velocity distribution and Boltzmann's formulation contributed to a succession of refinements of kinetic theory, which proved fully applicable to all properties of gases. It also led Maxwell to an accurate estimate of the size of molecules and to a method of separating gases in a centrifuge. The kinetic theory was derived using statistics, so it also revised opinions on the validity of the second law of thermodynamics, which states that heat cannot flow from a colder to a hotter body of its own accord. In the case of two connected containers of gases at the same temperature, it is statistically possible for the molecules to diffuse so that the faster-moving molecules all concentrate in one container while the slower molecules gather in the other, making the first container hotter and the second colder. Maxwell conceived this hypothesis, which is known as Maxwell's demon. Although this event is very unlikely, it is possible, and the second law is therefore not absolute, but highly probable.

Maxwell is generally considered the greatest theoretical physicist of the 1800s. He combined a rigorous mathematical ability with great insight, which enabled him to make brilliant advances in the two most important areas of physics at that time. In building on Faraday's work to discover the electromagnetic nature of light, Maxwell not only explained electromagnetism but also paved the way for the discovery and application of the whole spectrum of electromagnetic radiation that has characterized modern physics. Physicists now know that this spectrum also includes radio, infrared, ultraviolet, and X-ray waves, to name a few. In developing the kinetic theory of gases, Maxwell gave the final proof that the nature of heat resides in the motion of molecules.

Maxwell's famous equations, devised in 1864, use mathematics to explain the interaction between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, which are created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well.

With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X-rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.

As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.

In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age; estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own solar system. He did this with the Leviathan, a 183-cm (72-in) reflecting telescope built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.

In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.

Pasteur’s work on fermentation and spontaneous generation had considerable implications for medicine, because he believed that the origin and development of disease are analogous to the origin and process of fermentation. That is, disease arises from germs attacking the body from outside, just as unwanted microorganisms invade milk and cause fermentation. This concept, called the germ theory of disease, was strongly debated by physicians and scientists around the world. One of the main arguments against it was the contention that the role germs played during the course of disease was secondary and unimportant; the notion that tiny organisms could kill vastly larger ones seemed ridiculous to many people. Pasteur’s studies convinced him that he was right, however, and in the course of his career he extended the germ theory to explain the causes of many diseases.

Pasteur also determined the natural history of anthrax, a fatal disease of cattle. He proved that anthrax is caused by a particular bacillus and suggested that animals could be given anthrax in a mild form by vaccinating them with attenuated (weakened) bacilli, thus providing immunity from potentially fatal attacks. In order to prove his theory, Pasteur began by vaccinating twenty-five sheep; a few days later he inoculated these and twenty-five more sheep with an especially virulent culture, and he left ten sheep untreated. He predicted that the unvaccinated twenty-five sheep would all perish, and he concluded the experiment dramatically by showing, to a sceptical crowd, the carcasses of those twenty-five sheep lying side by side.

Pasteur spent the rest of his life working on the causes of various diseases, including septicaemia, cholera, diphtheria, fowl cholera, tuberculosis, and smallpox, and on their prevention by means of vaccination. He is best known for his investigations concerning the prevention of rabies, otherwise known in humans as hydrophobia. After experimenting with the saliva of animals suffering from this disease, Pasteur concluded that the disease rests in the nerve centres of the body; when an extract from the spinal column of a rabid dog was injected into the bodies of healthy animals, symptoms of rabies were produced. By studying the tissues of infected animals, particularly rabbits, Pasteur was able to develop an attenuated form of the virus that could be used for inoculation.

In 1885, a young boy and his mother arrived at Pasteur’s laboratory; the boy had been bitten badly by a rabid dog, and Pasteur was urged to treat him with his new method. At the end of the treatment, which lasted ten days, the boy was inoculated with the most potent rabies virus known; he recovered and remained healthy. Since that time, thousands of people have been saved from rabies by this treatment.

Pasteur’s research on rabies resulted, in 1888, in the founding of a special institute in Paris for the treatment of the disease. This became known as the Institut Pasteur, and it was directed by Pasteur himself until he died. (The institute still flourishes and is one of the most important centres in the world for the study of infectious diseases and other subjects related to microorganisms, including molecular genetics.) By the time of his death in Saint-Cloud on September 28, 1895, Pasteur had long since become a national hero and had been honoured in many ways. He was given a state funeral at the Cathedral of Notre-Dame, and his body was placed in a permanent crypt in his institute.

Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. Nevertheless, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that has still not subsided. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.

In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.

At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940's American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. These experiments showed that DNA is the chemical that makes up genes and is thus the key to heredity.

After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.

At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but also by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine’s chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to fewer than ten a year by the 21st century. By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. Nevertheless, by the 1980's the medical community’s confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause haemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.

In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of a normal or genetically altered gene into a patient’s cells replaces nonfunctional or missing genes.

Improved drugs and new tools have made routine many surgical operations that were once considered impossible. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibre-optic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as ‘telemedicine’, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.

In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind’. In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.

The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.

In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In 1920 Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television’s picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.

In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.

During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.

Further miniaturization led in 1971 to the first microprocessor: a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today’s personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950's. Once used only by large businesses, computers are now used by professionals, small retailers, and students to complete a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to communicate with each other through worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.

During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.

When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960's NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960's and 1970's, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth’s solar system.

In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.

In 1900 the German physicist Max Planck proposed the then-sensational idea that energy is not infinitely divisible but is always given off in set amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light. This, together with Einstein's special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.

Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. The principle states that the product of the uncertainty in the measured value of a component of momentum (p_x) and the uncertainty in the corresponding coordinate (x) is of the same order of magnitude as the Planck constant. In its most precise form:

Δp_x × Δx ≥ h/4π

where Δx represents the root-mean-square value of the uncertainty. For most purposes one can assume:

Δp_x × Δx = h/2π

The principle can be derived exactly from quantum mechanics, a physical theory that grew out of Planck’s quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. Its mathematical forms include ‘wave mechanics’ (Schrödinger) and ‘matrix mechanics’ (Born and Heisenberg), all of which are equivalent.

Nonetheless, the principle is most easily understood as a consequence of the fact that any measurement of a system must disturb the system under investigation, with a resulting lack of precision in the measurement. For example, to see an electron, and thus measure its position, photons would have to be reflected from it. Even if a single photon could be used and detected with a microscope, the collision between the electron and the photon would change the electron’s momentum. This is the Compton effect: the wavelength of the scattered photon is increased by an amount Δλ, where:

Δλ = (2h/m0c) sin²(½φ)

This is the Compton equation, in which h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photon. The quantity h/m0c is known as the Compton wavelength, symbol λC, which for an electron is equal to 0.002 43 nm.
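The quoted value of the electron’s Compton wavelength can be checked directly from the relation λC = h/m0c. The following sketch (using standard SI values for the constants, which are assumptions rather than values given in the text) performs the arithmetic:

```python
# Check of the Compton wavelength lambda_C = h / (m0 * c) for an electron.
# The constants below are standard SI values, assumed rather than taken from the text.
h = 6.626e-34    # Planck constant in joule-seconds
m0 = 9.109e-31   # electron rest mass in kilograms
c = 2.998e8      # speed of light in metres per second

lambda_C = h / (m0 * c)        # Compton wavelength in metres
lambda_C_nm = lambda_C * 1e9   # converted to nanometres

print(round(lambda_C_nm, 5))   # -> 0.00243, agreeing with the value quoted above
```

The result, about 2.43 × 10⁻¹² m, matches the 0.002 43 nm figure given for the electron.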

A similar relationship applies to the determination of energy and time, thus:

ΔE × Δt ≥ h/4π.

The effects of the uncertainty principle are not apparent with large systems because of the small size of h. However, the principle is of fundamental importance in the behaviour of systems on the atomic scale. For example, the principle explains the inherent width of spectral lines: if the lifetime of an atom in an excited state is very short, there is a large uncertainty in its energy, and the line resulting from a transition is broad.
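The spectral-line example can be made concrete with the energy-time relation. As a rough sketch (the 10-nanosecond excited-state lifetime is an assumed, typical order-of-magnitude figure, not a value from the text), the minimum energy width of a line is:

```python
import math

# Minimum energy uncertainty from dE * dt >= h / (4 * pi), illustrating
# the natural width of a spectral line. The lifetime is an assumed example value.
h = 6.626e-34        # Planck constant in joule-seconds
lifetime = 1e-8      # assumed excited-state lifetime in seconds (10 ns)

dE = h / (4 * math.pi * lifetime)   # minimum energy width in joules
dE_eV = dE / 1.602e-19              # converted to electronvolts

# A shorter lifetime gives a proportionally larger dE, hence a broader line.
print(dE_eV)
```

For this assumed lifetime the width comes out to a few times 10⁻⁸ eV, tiny on everyday scales but measurable in high-resolution spectroscopy, which is why h being small hides the principle from large systems while it dominates atomic ones.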

One consequence of the uncertainty principle is that the behaviour of a system cannot be fully predicted; the macroscopic principle of causality cannot apply at the atomic level. Quantum mechanics instead gives a statistical description of the behaviour of physical systems.

Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.

In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of nuclear fission both as an energy source and as a weapon.

These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, scientists now know that atoms are made up of twelve fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.

Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between ten and twenty billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.

Within the paradigms of science, Descartes posited the existence of two categorically different domains of existence: the res extensa and the res cogitans, the ‘extended substance’ and the ‘thinking substance’. Descartes defined the extended substance as the realm of physical reality, within which primary mathematical and geometrical forms reside, and the thinking substance as the realm of human subjective reality. Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind, or in human subjectivity, was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally ‘revealed’ truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what we term the ‘hidden ontology of classical epistemology’.

While classical epistemology would serve the progress of science very well, it also presented us with a terrible dilemma about the relationship between ‘mind’ and the ‘world’. If there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, breathe, and have our being actually exists? Descartes’s resolution of this dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.

As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. ‘I think, therefore I am’ may be a marginally persuasive way of confirming the real existence of the thinking self. However, the understanding of physical reality that obliged Descartes and others to doubt the existence of this self implied that the separation between the subjective world, or the world of life, and the real world of physical reality was ‘absolute’.

Our proposed new understanding of the relationship between mind and world is framed within the larger context of the history of mathematical physics, the origins and extensions of the classical view of the foundations of scientific knowledge, and the various ways that physicists have attempted to obviate previous challenges to the efficacy of classical epistemology. This serves as background for a new relationship between parts and wholes in quantum physics, as well as for a similar view of the relationship that has emerged in the so-called ‘new biology’ and in recent studies of the evolution of modern humans.

Nevertheless, at the end of this arduous journey lie two conclusions. First, there is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world, which some have described as ‘the disease of the Western mind’. Second, there is a new basis for dialogue between two cultures that are now badly divided and very much in need of an enlarged sense of common understanding and shared purpose. Let us briefly consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by classical physics and formalized by Descartes.

The first scientific revolution of the seventeenth century freed Western civilization from the paralysing and demeaning forces of superstition, laid the foundations for rational understanding and control of the processes of nature, and ushered in an era of technological innovation and progress that provided untold benefits for humanity. Nevertheless, as classical physics progressively dissolved the distinction between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien to the world of everyday life.

Descartes quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent ‘algebraic geometry’.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world would be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This method of investigating the extent of knowledge and its basis in reason or experience attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process is eventually dramatized in the figure of the evil demon, or malin génie, whose aim is to deceive us, so that our senses, memories, and reasonings lead us astray. The task then becomes one of finding a demon-proof point of certainty, and Descartes produces this in the famous ‘Cogito ergo sum’: ‘I think, therefore I am’. It is on this slender basis that the correct use of our faculties has to be re-established, but it seems as though Descartes has denied himself any materials to use in reconstructing the edifice of knowledge. He has a basis, but no way of building on it without invoking principles that will not be demon-proof, and so will not meet the standards he had apparently set himself. It is possible to interpret him as using ‘clear and distinct ideas’ to prove the existence of God, whose benevolence then justifies our use of clear and distinct ideas (‘God is no deceiver’): this is the notorious Cartesian circle. Descartes’s own attitude to this problem is not quite clear; at times he seems more concerned with providing a stable body of knowledge that our natural faculties will endorse, rather than one that meets the more severe standards with which he starts.
For example, in the second set of Replies he shrugs off the possibility of ‘absolute falsity’ of our natural system of belief, in favour of our right to retain ‘any conviction so firm that it is quite incapable of being destroyed’. The need to add such natural belief to anything certified by reason eventually became the cornerstone of Hume’s philosophy, and the basis of most 20th-century reactions to the method of doubt.

In his own time René Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of its causal efficacy, problems met by appeal to the action of God. On this view, events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. Although the position is associated especially with Malebranche, it is much older, being found among the Islamic philosophers, whose practice of kalam involved adducing philosophical proofs to justify elements of religious doctrine. Kalam plays a parallel role in Islam to that which scholastic philosophy played in the development of Christianity; the practitioners of kalam were known as the Mutakallimun. Descartes’s dualism also gives rise to the problem, insoluble in its own terms, of ‘other minds’. Descartes’s notorious denial that nonhuman animals are conscious is a stark illustration of the problem.

In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the nature of a ‘ball of wax’ surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought here is reflected in Leibniz’s view, as held later by Russell, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension there can be no empty space or ‘void’; since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although the structure of Descartes’s epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

It seems, nonetheless, that the radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanisms, without any concerns about its spiritual dimension or ontological foundations. In the meantime, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’s compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ are the guiding principles of this consciousness. Rousseau also deified the idea of the ‘general will’ of the people to achieve these goals and declared that those who did not conform to this will were social deviants.

Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musing. In Goethe’s attempt to wed mind and matter, nature became a mindful agency that ‘loves illusion’, shrouds man in mist, ‘presses him to her heart’, and punishes those who fail to see the ‘light’. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unifies mind and matter is progressively moving toward self-realization and undivided wholeness.

Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things, bodies and minds, are completely different from one another: bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.

For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources that in turn affect the brain, affecting mental states. Thus, the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, ‘I must find a truth that is true for me . . . the idea for which I can live or die.’ Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual’s response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a ‘leap of faith’ into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the ‘Death of God’ and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis, in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one’s life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre’s philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a ‘futile passion’. Sartre nevertheless insisted that his existentialism was a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard’s concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), ‘We must love life more than the meaning of it.’

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka’s themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theater of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard’s thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche. After declaring that God and ‘divine will’ do not exist, Nietzsche reified the ‘essences’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.

In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language’. The prison as he conceived it, however, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on will.

Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examinations of phenomena at the expense of mind; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche’s emotionally charged defence of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.

Friedrich Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for “truth”: we know (or believe or imagine) just as much as may be useful in the interests of the human herd, the species: and even what is called “utility” is ultimately also a mere belief, something imaginary and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

This position is very radical: Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘Perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation, and to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: ‘Facts are precisely what there is not. Only interpretations’.

It is often claimed that Perspectivism is self-undermining. If the thesis that all views are interpretations is true, then, it is argued, there is at least one view that is not an interpretation. If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.

Yet this refutation assumes that if a view, like Perspectivism itself, is an interpretation, it is wrong. This is not the case. To call any view, including Perspectivism, an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that Perspectivism is literally false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to facts specified internally by those approaches themselves. But it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’ (The Gay Science): neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life. The existence of many purposes and needs relative to which the value of theories is established, another crucial element of Perspectivism, is sometimes thought to imply a radical relativism, according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value; he holds that specific views can be compared with and evaluated in relation to one another, and the ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

Still, Nietzsche would not be troubled by this fact, which his opponents also have to confront; they only attempt, he would argue, to suppress it by insisting on the hope that all disagreements are in principle eliminable, even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal. He considers irresoluble disagreement an essential part of human life.

Knowledge for Nietzsche is again material, but now based on desire and bodily needs more than social refinements. Perspectives are to be judged not by their relation to the absolute but on the basis of their effects in a specific era. The possibility of any truth beyond such a local, pragmatic one becomes a problem in Nietzsche, since neither a noumenal realm nor an historical synthesis exists to provide an absolute criterion of adjudication for competing truth claims: what get called truths are simply beliefs that have been held for so long that we have forgotten their genealogy. In this Nietzsche reverses the Enlightenment dictum that truth is the way to liberation, suggesting that truth claims, insofar as they are considered absolute, foreclose debate and conceptual progress and cause backwardness and unnecessary misery. Nietzsche moves back and forth, without resolution, between the positing of trans-historical truth claims, such as his claim about the will to power, and a kind of epistemic nihilism that calls into question not only the possibility of truth but the need and desire for it as well. Perhaps most important, Nietzsche introduces the notion that truth is a kind of human practice, a game whose rules are contingent rather than necessary. The evaluation of truth claims should be based on their strategic effects, not on their ability to represent a reality conceived of as separate from and autonomous of human influence. For Nietzsche, all truth is truth from or within a particular perspective. The perspective may be a general human point of view, set by such things as the nature of our sensory apparatus, or it may be thought to be bound by culture, history, language, class, or gender. Since there may be many perspectives, there are also different families of truths. The term Perspectivism is, of course, frequently applied to Nietzsche’s philosophy.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism, the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This direct linkage between the nineteenth-century crisis over epistemological foundations and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.

In Sartre’s main philosophical work, Being and Nothingness, he examines the relationships between Being For-itself (consciousness) and Being In-itself (the non-conscious world). He rejects central tenets of the rationalist and empiricist traditions, calling the view that the mind or self is a thing or substance ‘Descartes’s substantialist illusion’, and claiming also that consciousness does not contain ideas or representations, which ‘. . . are idols invented by the psychologists’. Sartre also attacks idealism in the forms associated with Berkeley and Kant, and concludes that his account of the relationship between consciousness and the world is neither realist nor idealist.

Sartre also discusses Being For-others, which comprises the aspects of experience that involve interactions with other minds. His views are subtle: roughly, he holds that one’s awareness of others is constituted by feelings of shame, pride, and so on.

Sartre’s rejection of ideas, and his denial of idealism, appear to commit him to direct realism in the theory of perception. This is not inconsistent with his claim to be neither realist nor idealist, since by ‘realist’ he means views that allow for the mutual independence or in-principle separability of mind and world. Against this Sartre emphasizes, after Heidegger, that perceptual experience has an active dimension, in that it is a way of interacting and dealing with the world rather than a way of merely contemplating it (‘activity, as spontaneous, unreflecting consciousness, constitutes a certain existential stratum in the world’). Consequently, he holds that experience is richer, and open to more aspects of the world, than empiricist writers customarily claim:

When I run after a streetcar . . . there is consciousness of-the-streetcar-having-to-be-overtaken, etc., . . . I am then plunged into the world of objects, it is they that constitute the unity of my consciousness, it is they that present themselves with values, with attractive and repellent qualities . . .

Relatedly, he insists that I experience material things as having certain potentialities-for-me (‘nothingness’). I see doors and bottles as openable, bicycles as ridable (these matters are linked ultimately to the doctrine of extreme existentialist freedom). Similarly, if my friend is not where I expect to meet her, then I experience her absence ‘as a real event’.

These phenomenological claims are striking and compelling, but Sartre pays insufficient attention to such things as illusions and hallucinations, which are normally cited as problems for direct realists. In his discussion of mental imagery, however, he describes the act of imaging as a ‘transformation’ of ‘psychic material’. This connects with his view that even a physical image such as a photograph of a tree does not figure as an object of consciousness when it is experienced as a tree-representation (rather than as a piece of coloured card). Nonetheless, the fact remains that the photograph continues to contribute to the character of the experience. Given this, it is hard to see how Sartre avoids positing a mental analogue of a photograph for episodes of mental imaging, and harder still to reconcile this with his rejection of visual representations. In any case, this merely raises once more the issue of perceptual illusion and hallucination, and the problem of reconciling an account of them with direct realism.

Much of Western religious and philosophical thought since the seventeenth century has sought to obviate this prospect with an appeal to ontology or to some conception of God or Being. Yet we continue to struggle, as philosophical postmodernism attests, with the terrible prospect posed by Nietzsche: we are locked in a prison house of our individual subjective realities in a universe that is as alien to our thought as it is to our desires. This universe may seem comprehensible and knowable in scientific terms, and science does seek in some sense, as Koyré puts it, to ‘find a place for everything.’ Nonetheless, the ghost of Descartes lingers in the widespread conviction that science does not provide a ‘place for man’ or for all that we know as distinctly human in subjective reality.

With The Gay Science (1882) Nietzsche began the crucial exploration of self-mastery, the relations between reason and power, and the revelation of the unconscious striving after power that provides the actual energy for the apparent self-denial of the ascetic and the martyr. It was during this period that Nietzsche’s failed relationship with Lou Salomé resulted in the emotional crisis from which Also sprach Zarathustra (1883-5; trans. as Thus Spoke Zarathustra) signals a recovery. This work is frequently regarded as Nietzsche’s masterpiece. It was followed by Jenseits von Gut und Böse (1886; trans. as Beyond Good and Evil) and Zur Genealogie der Moral (1887; trans. as On the Genealogy of Morals).

In Thus Spake Zarathustra (1883-85), Friedrich Nietzsche introduced in eloquent poetic prose the concepts of the death of God, the superman, and the will to power. Vigorously attacking Christianity and democracy as moralities for the ‘weak herd’, he argued for the ‘natural aristocracy’ of the superman who, driven by the ‘will to power’, celebrates life on earth rather than sanctifying it for some heavenly reward. Such a heroic man of merit has the courage to ‘live dangerously’ and thus rise above the masses, developing his natural capacity for the creative use of passion.

Also known as radical theology, this movement flourished in the mid 1960s. As a theological movement it never attracted a large following, did not find a unified expression, and passed off the scene as quickly and dramatically as it had arisen. There is even disagreement as to who its major representatives were. Some identify two, and others three or four. Although small, the movement attracted attention because it was a spectacular symptom of the bankruptcy of modern theology and because it was a journalistic phenomenon. The very statement ‘God is dead’ was tailor-made for journalistic exploitation. The representatives of the movement effectively used periodical articles, paperback books, and the electronic media. This movement gave expression to an idea that had been incipient in Western philosophy and theology for some time, the suggestion that the reality of a transcendent God at best could not be known and at worst did not exist at all. The philosopher Kant and the theologian Ritschl denied that one could have a theoretical knowledge of the being of God. Hume and the empiricists for all practical purposes restricted knowledge and reality to the material world as perceived by the five senses. Since God was not empirically verifiable, the biblical world view was said to be mythological and unacceptable to the modern mind. Such atheistic existentialist philosophers as Nietzsche despaired even of the search for God; it was he who coined the phrase ‘God is dead’ almost a century before the death of God theologians.

Mid-twentieth-century theologians not associated with the movement also contributed to the climate of opinion out of which death of God theology emerged. Rudolf Bultmann regarded all elements of the supernaturalistic, theistic world view as mythological and proposed that Scripture be demythologized so that it could speak its message to the modern person.

Paul Tillich, an avowed anti-supernaturalist, said that the only nonsymbolic statement that could be made about God was that he was being itself. He is beyond essence and existence; therefore, to argue that God exists is to deny him. It is more appropriate to say God does not exist. At best Tillich was a pantheist, but his thought borders on atheism. Dietrich Bonhoeffer (whether rightly understood or not) also contributed to the climate of opinion with some fragmentary but tantalizing statements preserved in Letters and Papers from Prison. He wrote of the world and man ‘coming of age’, of ‘religionless Christianity’, of the ‘world without God’, and of getting rid of the ‘God of the gaps’ and getting along just as well as before. It is not always certain what Bonhoeffer meant, but if nothing else, he provided a vocabulary that later radical theologians could exploit.

It is clear, then, that as startling as the idea of the death of God was when proclaimed in the mid 1960s, it did not represent as radical a departure from recent philosophical and theological ideas and vocabulary as might superficially appear.

Just what was death of God theology? The answers are as varied as those who proclaimed God's demise. Since Nietzsche, theologians had occasionally used ‘God is dead’ to express the fact that for an increasing number of people in the modern age God seems to be unreal. Nonetheless, the idea of God's death began to have special prominence in 1957 when Gabriel Vahanian published a book entitled God is Dead. Vahanian did not offer a systematic expression of death of God theology. Instead, he analysed those historical elements that contributed to the masses of people accepting atheism not so much as a theory but as a way of life. Vahanian himself did not believe that God was dead. Still, he urged that there be a form of Christianity that would recognize the contemporary loss of God and exert its influence through what was left. Other proponents of the death of God had the same assessment of God's status in contemporary culture, but were to draw different conclusions.

Thomas J. J. Altizer believed that God had really died. Nonetheless, Altizer often spoke in exaggerated and dialectic language, occasionally with heavy overtones of Oriental mysticism. It is sometimes difficult to know exactly what Altizer meant when he spoke in dialectical opposites, such as ‘God is dead, thank God’. Apparently the real meaning of Altizer’s belief that God had died is to be found in his belief in God’s immanence. To say that God has died is to say that he has ceased to exist as a transcendent, supernatural being. Instead, he has become fully immanent in the world. The result is an essential identity between the human and the divine. God died in Christ in this sense, and the process has continued time and again since then. Altizer claims the church tried to give God life again and put him back in heaven by its doctrines of resurrection and ascension. However, the traditional doctrines about God and Christ must be repudiated because man has discovered after nineteen centuries that God does not exist. Christians must even now will the death of God by which the transcendent becomes immanent.

For William Hamilton the death of God describes the event many have experienced over the last two hundred years. They no longer accept the reality of God or the meaningfulness of language about him. Nontheistic explanations have been substituted for theistic ones. This trend is irreversible, and everyone must come to terms with the historical-cultural death of God. God’s death must be affirmed and the secular world embraced as normative intellectually and good ethically. Doubtless, Hamilton was optimistic about the world, because he was optimistic about what humanity could do and was doing to solve its problems.

Paul van Buren is usually associated with death of God theology, although he himself disavowed this connection. Yet, his disavowal seems hollow in the light of his book The Secular Meaning of the Gospel and his article ‘Christian Education Post Mortem Dei.’ In the former he accepts empiricism and the position of Bultmann that the world view of the Bible is mythological and untenable to modern people. In the latter he proposes an approach to Christian education that does not assume the existence of God but does assume ‘the death of God’ and that ‘God is gone’.

Van Buren was concerned with the linguistic aspects of God's existence and death. He accepted the premise of empirical analytic philosophy that real knowledge and meaning can be conveyed only by language that is empirically verifiable. This is the fundamental principle of modern secularists and is the only viable option in this age. If only empirically verifiable language is meaningful, ipso facto all language that refers to or assumes the reality of God is meaningless, since one cannot verify God's existence by any of the five senses. Theism, belief in God, is not only intellectually untenable, it is meaningless. In The Secular Meaning of the Gospel van Buren seeks to reinterpret the Christian faith without reference to God. One searches the book in vain for even one clue that van Buren is anything but a secularist trying to translate Christian ethical values into that language game. There is a decided shift in van Buren's later book Discerning the Way, however.

In retrospect, there was clearly no single death of God theology, only death of God theologies. Their real significance was that modern theologies, by giving up the essential elements of Christian belief in God, had logically led to what were really antitheologies. When the death of God theologies passed off the scene, the commitment to secularism remained and manifested itself in other forms of secular theology in the late 1960s and the 1970s.

Nietzsche is unchallenged as the most insightful and powerful critic of the moral climate of the 19th century (and of what of it remains in ours). His exploration of unconscious motivation anticipated Freud. He is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that comes when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the Übermensch, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words to condemn any uncaged beasts of prey who find their style by exerting repulsive power over others. This problem is not helped by Nietzsche’s frequently expressed misogyny, although in such matters the interpretation of his many-layered and ironic writings is not always straightforward. Similarly, such anti-Semitism as has been found in his work is balanced by an equally vehement denunciation of anti-Semitism, and an equal or greater dislike of the German character of his time.

Nietzsche’s current influence derives not only from his celebration of will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many of the central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’; the denial of facts; the denial of essences; the celebration of the plurality of interpretation and of the fragmented self; and the downgrading of reason and the politicization of discourse. All awaited rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his Perspectivism is echoed in the shifting array of literary devices (humour, irony, exaggeration, aphorisms, verse, dialogue, parody) with which he explores human life and history.

Yet, as we have seen, the origins of the present division can be traced to the emergence of classical physics and the stark Cartesian division between mind and body. Mind and the bodily world are two separate substances; the self, as it happens, is associated with a particular body, but is self-subsisting and capable of independent existence. This Cartesian duality, much like the ‘ego’ that we are tempted to imagine as a simple unique thing that makes up our essential identity, seemed sanctioned by this physics. The tragedy of the Western mind, well represented in the work of a host of writers, artists, and intellectuals, is that the Cartesian division was perceived as incontrovertibly real.

Beginning with Nietzsche, those who wished to free the realm of the mental from the oppressive implications of the mechanistic world-view sought to undermine the alleged privileged character of the knowledge claims of physics with an attack on their epistemological authority. Husserl’s attempt, which failed, to save the classical view of correspondence by grounding the logic of mathematical systems in human consciousness not only resulted in a view of human consciousness that became characteristically postmodern; it also represents a direct link with the epistemological crisis about the foundations of logic and number in the late nineteenth century that foreshadowed the epistemological crisis occasioned by quantum physics beginning in the 1920s. The result was disparate views on the existence of ontology and the character of scientific knowledge that fuelled the conflict between the two.

If there were world enough and time enough, the conflict between the two cultures could be viewed as an interesting artifact of the richly diverse coordinative systems of higher education. Nevertheless, as the ecological crisis teaches us, the ‘world enough’ capable of sustaining the growing number of our life forms and the ‘time enough’ that remains to reduce and reverse the damage we are inflicting on this world are rapidly diminishing. We should therefore put an end to this absurd ‘betweenness’ and get on with the business of coordinating human knowledge in the interest of human survival, in a new age of enlightenment that could be far more humane and much more enlightened than any that has gone before.

It is nonetheless true that there have been significant advances in our understanding of the purposive mind. Cognitive science is an interdisciplinary approach to cognition that draws primarily on ideas from cognitive psychology, artificial intelligence, linguistics, and logic. Some philosophers are cognitive scientists; others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from philosophers of mind, and this has changed the character of the philosophy of mind: there are now areas where philosophical work on the nature of mind is continuous with scientific work. Yet the problems that make up this field, concerning the nature of ‘thinking’ and ‘mental properties’, are those standardly and traditionally treated within the philosophy of mind rather than those that emerge from recent developments in cognitive science. The cognitive aspect of a sentence, what has to be understood in order to know what would make it true or false, is frequently identified with its truth condition. Cognitive science, then, is the scientific study of the processes of awareness, thought, and mental organization, often by means of computer modelling or artificial intelligence research. That something is a theory, having only to do with its structure and the way it functions, does not mean that the scientific community currently accredits it: many theories, though technically scientific, have been rejected because the scientific evidence tells strongly against them. The historical enquiry into the evolution of self-consciousness, developing from elementary sense experience to fully rational, free thought processes capable of yielding knowledge, is associated with the work and school of Husserl.
Following Brentano, Husserl realized that intentionality was the distinctive mark of consciousness, and saw in it a concept capable of overcoming traditional mind-body dualism. The study of consciousness, therefore, has two sides: a conscious experience can be regarded as an element in a stream of consciousness, but also as a representative of one aspect or ‘profile’ of an object. In spite of Husserl’s rejection of dualism, his belief that there is a subject-matter remaining after the epochē, or bracketing of the content of experience, associates him with the priority accorded to elementary experiences in the parallel doctrine of phenomenalism, and phenomenology has partly suffered from the eclipse of that approach to problems of experience and reality. Later phenomenologists such as Merleau-Ponty, however, do full justice to the world-involving nature of experience. Phenomenological theories are empirical generalizations of the data of experience, or of what is manifest in experience. More generally, the phenomenal aspects of things are the aspects that show themselves, rather than the theoretical aspects that are inferred or posited in order to account for them. Such theories merely describe the recurring processes of nature and do not refer to their causes; in the words of J.S. Mill, ‘objects are the permanent possibilities of sensation’. To inhabit a world of independent, external objects is, on this view, to be the subject of actual and possible orderly experiences. Espoused by Russell, the view issued in a programme of translating talk about physical objects and their locations into talk about possible experiences. The attempt is widely supposed to have failed, and the priority the approach gives to experience has been much criticized. It is more common in contemporary philosophy to see experience as itself a construct from the actual way of the world, rather than the other way round.

Phenomenological theories are also called ‘scientific laws’, ‘physical laws’, and ‘natural laws’. Newton’s third law is one example: it says that every action has an equal and opposite reaction. ‘Explanatory theories’ attempt to explain the observations rather than merely generalize them. Whereas laws are descriptions of empirical regularities, explanatory theories are conceptual constructions that explain why the data exist: atomic theory, for example, explains why we make certain observations, and the same could be said of DNA and relativity. Explanatory theories are particularly helpful in cases where the entities involved (like atoms or DNA) cannot be directly observed.

What is knowledge? How does knowledge get to have the content it has? The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato, for whom knowledge is true belief plus a logos, that which enables us to apprehend the principles and forms, i.e., an aspect of our own reasoning.

What makes a belief justified, and what measure of belief is knowledge? According to most epistemologists, knowledge entails belief, so that to know that such and such is the case is, among other things, to believe it. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or some facsimile of it, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis). The incompatibility thesis hinges on the equation of knowledge with certainty, together with the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty, while knowledge never does, believing something would rule out knowing it. But we have no reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest otherwise, that we cease to believe things about which we are completely confident, is bizarre.

A. D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley says that knowing is a matter of ‘what I can do’, where what I can do may include answering questions. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge: it would be peculiar to say, ‘I am unsure whether my answer is correct; still, I know it is correct.’ This tension Woozley explains by using a distinction between the conditions under which we are justified in making a claim (such as a claim to know something) and the conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know such and such unless I were sure of the truth of my claim.

Since Feuerbach there has been a growing tendency for the philosophy of religion either to concentrate upon the social and anthropological dimensions of religious belief, or to treat belief as a manifestation of various explicable psychological urges. Another reaction is retreat into a celebration of purely subjective existential commitments. Still, the ontological argument continues to attract attention, and modern anti-foundationalist trends in epistemology are not entirely hostile to cognitive claims based on religious experience.

Still, the problem of reconciling the subjective or psychological nature of mental life with its objective and logical content preoccupied Husserl, whose first major engagement with it was the monumental Logische Untersuchungen (trans. as Logical Investigations, 1970). Finding it impossible to keep a subjective and a naturalistic approach to knowledge together, he abandoned the naturalism in favour of a kind of transcendental idealism. The precise nature of this change is disguised by a penchant for new and impenetrable terminology, but the ‘bracketing’ of external questions seems to a great extent to acknowledge the implications of a solipsistic, disembodied Cartesian ego as its starting-point, with it thought of as inessential that the thinking subject is either embodied or surrounded by others. However, by the time of Cartesian Meditations (trans. 1960; first published in French as Méditations cartésiennes, 1931), a shift in priorities had begun, with the embodied individual, surrounded by others, rather than the disembodied Cartesian ego, now returned to a fundamental position. The extent to which this desirable shift undermines the programme of phenomenology identified with Husserl’s earlier approach remains unclear, although later phenomenologists such as Merleau-Ponty have worked fruitfully from the later standpoint.

Pythagoras established, and was the central figure in, a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers, the language of the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygon) contrasted with ordinary language. The language of mathematical and geometric forms seemed closed, precise, and pure: provided one understood the axioms and notations, the meaning conveyed was invariant from one mind to another. For the Pythagoreans this was the language empowering the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.

Pythagoras (b. c.570 BC) was the son of Mnesarchus of Samos, but emigrated (c.531 BC) to Croton in southern Italy. There he founded a religious society, but was forced into exile and died at Metapontum. Membership of the society entailed self-discipline, silence, and the observance of his taboos, especially against eating flesh and beans. Pythagoras taught the doctrine of metempsychosis, or the cycle of reincarnation, and was supposedly able to remember former existences. The soul, which has its own divinity and may have existed as an animal or plant, can nevertheless gain release by a religious dedication to study, after which it may rejoin the universal world-soul. Pythagoras is usually, but doubtfully, credited with having discovered the basis of acoustics, the numerical ratios underlying the musical scale, thereby inaugurating the arithmetical interpretation of nature. This tremendous success inspired the view that the whole of the cosmos should be explicable in terms of harmonia or number. The view represents a magnificent break from the Milesian attempt to ground physics on a substance shared by all things, concentrating instead on form, so that physical nature receives an intelligible grounding in geometrical structure. The view is vulgarized in the doctrine usually attributed to Pythagoras, that all things are numbers. The association of abstract qualities with numbers reached remarkable heights, with occult attachments, for instance, between justice and the number four, and mystical significance attributed especially to the number ten. Cosmologically, Pythagoras explained the origin of the universe in mathematical terms, as the imposition of limit on the limitless by a kind of injection of a unit. Followers of Pythagoras included Philolaus, the earliest cosmologist known to have understood that the earth is a moving planet. It is also likely that the Pythagoreans discovered the irrationality of the square root of two.

The Pythagoreans considered numbers to be among the building blocks of the universe. In fact, one of the most central beliefs of Pythagoras’ mathematikoi, his inner circle, was that reality was mathematical in nature. This made numbers valuable tools, and over time even the knowledge of a number’s name came to be associated with power. If you could name something you had a degree of control over it, and to have power over the numbers was to have power over nature.

One, for example, stood for the mind, emphasizing its oneness. Two was opinion, taking a step away from the singularity of mind. Three was wholeness (a whole needs a beginning, a middle, and an end to be more than a one-dimensional point), and four represented the stable squareness of justice. Five was marriage, being the sum of three and two, the first odd (male) and even (female) numbers. (Three was the first odd number because the number one was considered by the Greeks to be so special that it could not form part of an ordinary grouping of numbers.)

The allocation of interpretations went on up to ten, which for the Pythagoreans was the number of perfection. Not only was it the sum of the first four numbers; when ten dots are arranged in rows of 1, 2, 3, and 4, each row above the next, they form a perfect triangle, the simplest of the two-dimensional shapes. So convinced were the Pythagoreans of the importance of ten that they assumed there had to be a tenth body in the heavens in addition to the known ones, an anti-Earth, never seen because it was constantly behind the Sun. This power of the number ten may also have been linked with ancient Jewish thought, where it appears in a number of guises: the ten commandments, and the ten components of the Jewish mystical cabbala tradition.
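The ten-dot figure described above is simply the fourth triangular number: each row adds the next integer to the total. A minimal sketch in Python makes the arithmetic explicit (the function name `triangular` is my own, not a historical term):

```python
def triangular(n):
    # n-th triangular number: the sum of the first n positive integers,
    # i.e. the dot count of a triangle with rows of 1, 2, ..., n dots
    return n * (n + 1) // 2

# Rows of 1, 2, 3 and 4 dots give the Pythagorean figure of ten:
print(triangular(4))                          # 10
print([triangular(n) for n in range(1, 7)])   # [1, 3, 6, 10, 15, 21]
```

Note that six, discussed below, also appears in this sequence: it is the third triangular number.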

Such numerology, ascribing a natural or supernatural significance to numbers, can also be seen in Christian works, and continues in some new-age traditions. In the Opus majus, written in 1266, the English scientist-friar Roger Bacon wrote that: ‘Moreover, although a manifold perfection of number is found according to which ten is said to be perfect, and seven, and six, yet most of all does three claim itself perfection’.

Ten, as we have already seen, was allocated to perfection. Seven was the number of planets according to the ancient Greeks, while the Pythagoreans had designated seven the number of the universe. Six also has a mathematical significance, as Bacon points out, because if you break it down into the factors that can be multiplied together to make it (one, two, and three), they also add up to six:

1 × 2 × 3 = 6 = 1 + 2 + 3
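Six is in fact ‘perfect’ in the number-theoretic sense Bacon gestures at: it equals the sum of its proper divisors (1 + 2 + 3). A brief check in Python (the helper name `is_perfect` is mine):

```python
def is_perfect(n):
    # A number is perfect when it equals the sum of its proper divisors
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

# The perfect numbers below 500, all known to the ancients:
print([n for n in range(2, 500) if is_perfect(n)])  # [6, 28, 496]
```

The next perfect number after six is 28 (1 + 2 + 4 + 7 + 14), which the Greeks also knew.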

Such was the concern of the Pythagoreans to keep irrational numbers to themselves. Bearing this in mind, it might seem amazing that the Pythagoreans could cope with the values involved in this discovery at all. After all, since the square root of 2 cannot be represented by a ratio, we have to use a decimal fraction to write it out. It would indeed be amazing were it true that the Greeks had a grasp of irrational numbers as infinite decimal expansions. In fact, though you might find it mentioned that the Pythagoreans understood irrational numbers, to talk about them understanding numbers in this way totally misrepresents the way they thought.
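The irrationality result attributed to the Pythagoreans admits the classical reductio, sketched here in modern notation (the parity argument below is the standard textbook proof, not a reconstruction of the Pythagoreans’ own reasoning):

```latex
Suppose $\sqrt{2} = p/q$ with $p, q$ coprime integers. Then
\[
  p^2 = 2q^2 ,
\]
so $p^2$ is even and hence $p$ is even; write $p = 2k$. Substituting,
\[
  4k^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2k^2 ,
\]
so $q$ is even as well, contradicting the coprimality of $p$ and $q$.
Hence $\sqrt{2}$ cannot be written as a ratio of integers.
```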

This view of the logos subsequently became fused with Christian doctrine, the logos being conceived as God’s instrument in the development (and redemption) of the world. The notion survives in the idea of laws of nature, if these are conceived of as independent guides of the natural course of events, existing beyond the temporal world that they order. The theory of knowledge has as its central questions the origin of knowledge, the place of experience and of reason in generating knowledge, the relationship between knowledge and certainty and between knowledge and the impossibility of error, the possibility of universal scepticism, and the changing forms of knowledge that arise from new conceptualizations of the world.

One group of problems concerns the relation between mental and physical properties; collectively they are called ‘the mind-body problem’. This problem has been among the central questions of the philosophy of mind since Descartes formulated it three centuries ago, and for many people understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that a cause for celebration or for scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although the dualism that he espoused is far more widespread and far older, occurring in some form wherever there is a religious or philosophical tradition by which the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the way to integrate our understanding of people as bearers of physical properties on the one hand and as subjects of mental lives on the other.

It is the conviction of many that the discovery of non-locality has more potential to transform our conceptions of the ‘way things are’ than any previous discovery. Its implications extend well beyond the domain of the physical sciences, and the best efforts of many thoughtful people will be required to understand them.

Perhaps the most startling and potentially revolutionary of these implications in human terms is a view of the relationship between mind and world that is utterly different from that sanctioned by classical physics. René Descartes was among the first to realize that mind or consciousness in the mechanistic world-view of classical physics appeared to exist in a realm separate and distinct from nature. He quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience to be distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.

Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is sometimes known as the use of hyperbolic (extreme) doubt, or Cartesian doubt: the method of investigating how much knowledge has its basis in reason or experience, used by Descartes in the first two Meditations. The foundation is eventually found in the celebrated ‘Cogito ergo sum’: I think, therefore I am. By finding the point of certainty in my own awareness of my own self, Descartes gave a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysic associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes needed divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invoked a ‘clear and distinct perception’ in highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily put it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit.’

In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems concerning the nature of the causal connection between the two. It also prompted the alternative picture of two systems running in parallel: when I stub my toe, this does not cause pain, but there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain; when I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention. The theory has never been widely popular, and in its application to the mind-body problem many philosophers would say that it was the result of a misconceived ‘Cartesian dualism’ of ‘subjective knowledge’ and ‘physical theory.’

It also produces the problem, insoluble in its own terms, of ‘other minds.’ Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought here is reflected in Leibniz’s view, held later by Russell, that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes builds a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has often been rejected, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became the most central feature of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’s compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ are the guiding principles of this consciousness. Rousseau also elevated the ‘general will’ of the people to achieve these goals to a near-divine status, and declared that those who do not conform to this will are social deviants.

The Enlightenment idea of deism, which imagined the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at its origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only through divine revelation. This engendered a conflict between reason and revelation that persists to this day. It also laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating relations between mind and matter and for defining the special character of each.

Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musings. In Goethe’s attempt to wed mind and matter, nature becomes a mindful agency that ‘loves illusion,’ ‘shrouds man in mist,’ ‘presses him to her heart,’ and punishes those who fail to see the ‘light.’ Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful spirit that unites mind and matter is progressively moving toward self-realization and undivided wholeness.

The flaw of pure reason is, of course, the absence of emotion, and purely rational accounts of external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche, who declared that God and ‘divine will’ do not exist and proclaimed the knowledge that God is dead. The death of God he called the greatest event in modern history and the cause of extreme danger. Yet there is a paradox contained in these words. He never said that there was no God, but that the Eternal had been vanquished by Time and that the Immortal had suffered death at the hands of mortals. ‘God is dead’: it is like a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after him to the level of respectable mediocrity, making it sound like a collection of announcements by those who regret being unable to invest in an unsafe proposition. This is the very essence of Nietzsche’s spiritual existence, and from it flow despair and hope for a new generation of man, visions of catastrophe and glory, the icy brilliance of analytical reason fathoming with affected irreverence those depths until now hidden by awe and fear, and, side by side with it, ecstatic invocations of a ritual healer.

Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth.’ The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will.’

In Nietzsche’s view, the separation between ‘mind’ and ‘matter’ is more absolute and total than had previously been imagined. Based on the assumption that there are no real or necessary correspondences between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language.’ The prison as he conceived it, however, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on will.

Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become thereby members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examination of phenomena at the expense of mind; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow a basis for the free exercise of individual will.

Nietzsche’s emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless scientific universe proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. As it turned out, attempts to resolve that crisis resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of the correspondence between mathematical theory and physical reality and the privileged character of scientific knowledge.

Nietzsche appealed to this crisis in an effort to reinforce his assumption that, without ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl, attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. This represents a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.

Since Husserl’s epistemology, like that of Descartes and Nietzsche, was grounded in human subjectivity, a better understanding of his attempt to preserve the classical view of correspondence not only reveals more about the legacy of Cartesian dualism; it also suggests that the hidden ontology of classical epistemology was more responsible for the deep division and conflict between the two cultures of humanists and social scientists on the one hand and scientists and engineers on the other than we have previously imagined. The central question in this late-nineteenth-century debate over the status of the mathematical description of nature was the following: is the foundation of number and logic grounded in classical epistemology, or must we assume, without any ontology, that the rules of number and logic are grounded only in human consciousness? In order to frame this question in its proper context, we should first examine in more detail the intimate and ongoing dialogue between physics and metaphysics in Western thought.

Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.

For Nietzsche, however, all the activities of human consciousness share the predicament of psychology. There can be, for him, no ‘pure’ knowledge, only the satisfaction, however sophisticated, of an ever-varying intellectual need of the will to know. He therefore demands that man should accept moral responsibility for the kind of questions he asks, and that he should realize what values are implied in the answers he seeks~and in this he was more Christian than all our post-Faustian Fausts of truth and scholarship. ‘The desire for truth,’ he says, ‘is itself in need of critique. Let this be the definition of my philosophical task. By way of experiment, I shall for once question the value of truth.’ And so he does. He protests that, in an age that is as uncertain of its values as is his and ours, the search for truth will issue in either trivialities or~catastrophe. We might wonder how he would react to the pious hope of our day that the intelligence and moral conscience of politicians will save the world from the disastrous products of our scientific explorations and engineering skills. It is perhaps not too difficult to guess; for he knew that there was a fatal link between the moral resolution of scientists to follow the scientific search wherever, by its own momentum, it will take us, and the moral debility of societies not altogether disinclined to ‘apply’ the results, however catastrophic. Believing that there was a hidden identity among all the expressions of the ‘Will to Power’, he saw the element of moral nihilism in the ethics of our science: Its determination not to let ‘higher values’ interfere with its highest value ~Truth (as it conceives it). Thus he said that the goal of knowledge pursued by the natural sciences means perdition.

In these regions of his mind dwells the terror that he may have helped to bring about the very opposite of what he desired. When this terror comes to the fore, he is much afraid of the consequences of his teaching. Will, perhaps, the best be driven to despair by it, and the very worst accept it? Once he put into the mouth of some imaginary titanic genius what is his most terrible prophetic utterance: ‘Oh, grant madness, you heavenly powers, madness that at last I may believe in myself,

. . . I am consumed by doubts, for I have killed the Law. If I am not more than the Law, then I am the most abject of all men’.

Still, ‘God is dead,’ and, sadly, he had to think the meanest thought: He saw in the real Christ an illegitimate son of the Will to Power, a frustrated rabbi who set out to save himself and the underdog humanity from the intolerable strain of impotently resenting the Caesars~not to be Caesar was now proclaimed a spiritual distinction~a newly invented form of power, the power to be powerless.

It is the knowledge that God is dead, and suffered death at the hands of mortals: ‘God is dead’: It is like a cry mingled of despair and triumph, reducing the whole story of theism and agnosticism before and after him to the level of respectable mediocrity and making it sound like a collection of announcements. Nietzsche, for the nineteenth century, brings to its perverse conclusion a line of religious thought and experience linked with the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky, minds for whom God was not simply the creator of an order of nature within which man has his clearly defined place, but to whom He came in order to challenge their natural being, making demands that appeared absurd in the light of natural reason. These men are of the family of Jacob: Having wrestled with God for His blessing, they ever after limp through life with the framework of Nature incurably out of joint. Nietzsche is just such a wrestler, except that in him the shadow of Jacob merges with the shadow of Prometheus. Like Jacob, Nietzsche too believed that he prevailed against God in that struggle, and won a new name for himself, the name of Zarathustra. Yet the words he spoke on his mountain to the angel of the Lord were: ‘I will not let thee go, except thou curse me.’ Or, in words that Nietzsche did in fact speak: ‘I have on purpose devoted my life to exploring the whole contrast to a truly religious nature. I know the Devil and all his visions of God.’ ‘God is dead’ is the very core of Nietzsche’s spiritual existence, and what follows is despair and hope in a new greatness of man.

Husserl’s best-known disciple was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. Sartre’s first novel, La Nausée, was published in 1938 (trans. as Nausea, 1949). L’Imaginaire (1940, trans. as The Psychology of the Imagination, 1948) is a contribution to phenomenological psychology. Briefly captured by the Germans, Sartre spent the remaining war years in Paris, where L’Être et le néant, his major purely philosophical work, was published in 1943 (trans. as Being and Nothingness, 1956). The lecture L’Existentialisme est un humanisme (1946, trans. as Existentialism is a Humanism, 1947) consolidated Sartre’s position as France’s leading existentialist philosopher.

Sartre’s philosophy is concerned entirely with the nature of human life and the structures of consciousness. As a result, it gains expression in his novels and plays as well as in more orthodox academic treatises. Its immediate ancestor is the phenomenological tradition of his teachers, and Sartre can most simply be seen as concerned to rebut the charge of idealism as it is laid at the door of phenomenology. The agent is not a spectator of the world but, like everything in the world, constituted by acts of intentionality. The self thus constituted is historically situated, but as an agent whose own mode of finding itself in the world makes for responsibility and emotion. Responsibility is, however, a burden that we frequently cannot bear, and bad faith arises when we deny our own authorship of our actions, seeing them instead as forced responses to situations not of our own making.

Sartre thus locates the essential nature of human existence in the capacity for choice, although choice, being equally incompatible with determinism and with the existence of a Kantian moral law, implies a synthesis of consciousness (being for-itself) and the objective (being in-itself) that is forever unstable. The unstable and constantly disintegrating nature of free will generates anguish. For Sartre, our capacity to make negative judgements is one of the fundamental puzzles of consciousness. Like Heidegger, he took the ‘ontological’ approach of relating this capacity to the nature of nonbeing, a move that decisively differentiated him from the Anglo-American tradition of modern logic.

The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.

The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos, one that sanctioned radical individualism and bred aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism, and alleged that mind could free itself from the constraints of matter in states of mystical awareness.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

Nonetheless, even scientists like Planck and Einstein understood and embraced holism as an inescapable condition of our physical existence. According to Einstein’s general theory of relativity, wrote Planck, ‘each individual particle of a system, in a certain sense, at any one time, exists simultaneously in every part of the space occupied by the system’. The system, as Planck made clear, is the entire cosmos. As Einstein put it, ‘physical reality must be described in terms of continuous functions in space. The material point, therefore, can hardly be conceived any more as the basic concept of the theory.’

Newton was the British mathematician and natural philosopher whom Hume called ‘the greatest and rarest genius that ever arose for the ornament and instruction of the species.’ His mathematical discoveries are usually dated to between 1665 and 1666, when he was secluded in Lincolnshire, the university being closed because of the plague. His great work, the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy, usually referred to as the Principia), was published in 1687.

Yet throughout his career, Newton engaged in scientific correspondence and controversy. The often-quoted remark, ‘If I have seen further it is by standing on the shoulders of giants’, occurs in a conciliatory letter to Robert Hooke (1635–1703). Newton was in fact echoing the remark of Bernard of Chartres in 1120: ‘We are dwarfs standing on the shoulders of giants’. The dispute with Leibniz over the invention of the calculus is his best-known quarrel, with Newton himself appointing the committee of the Royal Society that judged the question of precedence, and then writing the report, the Commercium Epistolicum, awarding himself the victory. Although himself a figure of the ‘age of reason’, Newton was also interested in alchemy, prophecy, gnostic wisdom, and theology.

The philosophical influence of the Principia was incalculable, and from Locke’s Essay onward philosophers recognized Newton’s work as a new paradigm of scientific method, without being entirely clear what different parts reason and observation play in the edifice. Although Newton ushered in so much of the scientific world-view, in the General Scholium at the end of the Principia he argues that ‘it is not to be conceived that mere mechanical causes could give birth to so many regular motions’ and hence that his discoveries pointed to the operations of God, ‘to discourse of whom from phenomena does certainly belong to natural philosophy.’ Newton confesses that he has ‘not been able to discover the cause of those properties of gravity from phenomena’: Hypotheses non fingo (I feign no hypotheses). It was left to Hume to argue that the kind of thing Newton does, namely place the events of nature into law-like orders and patterns, is the only kind of thing that scientific enquiry can ever do.

‘Action at a distance’ is a much contested concept in the history of physics. Aristotelian physics holds that every motion requires a conjoined mover: Action can therefore never occur at a distance, but needs a medium enveloping the body, parts of which push it from behind (antiperistasis). Although natural motions like free fall and magnetic attraction (quaintly called ‘coition’) were recognized in the post-Aristotelian period, the rise of the ‘corpuscularian’ philosophy again banned ‘attraction’, or unmediated action at a distance: The classic argument is that ‘matter cannot act where it is not’. Boyle, in his Sceptical Chymist (1661) and The Origin of Forms and Qualities (1666), held that all material substances are composed of minute corpuscles, themselves possessing shape, size, and motion. The different properties of materials would arise from different combinations and collisions of corpuscles: Chemical properties, such as solubility, would be explicable by the mechanical interactions of corpuscles, just as the capacity of a key to turn a lock is explained by their respective shapes. In Boyle’s hands the idea is opposed to the Aristotelian theory of elements and principles, which he regarded as untestable and sterile. His approach is a precursor of modern chemical atomism and had immense influence on Locke. Locke, however, recognized the need for a different kind of force guaranteeing the cohesion of atoms, and both this and the interaction between such atoms were criticized by Leibniz.

Cartesian physical theory also postulated ‘subtle matter’ to fill space and provide the medium for force and motion. Its successor, the aether, was postulated in order to provide a medium for transmitting forces and causal influences between objects that are not in direct contact. Even Newton, whose treatment of gravity might seem to leave it conceived of as action at a distance, supposed that an intermediary must be postulated, although he could make no hypothesis as to its nature. Locke, having originally said that bodies act on each other ‘manifestly by impulse and nothing else’, later changed his mind and struck out the words ‘and nothing else’, although impulse remained ‘the only way that we can conceive bodies to operate in’. In the Metaphysical Foundations of Natural Science, Kant clearly sets out the view that the way in which bodies impel each other is no more natural, or intelligible, than the way in which they act at a distance; in particular, he repeats the point half-understood by Locke, that any conception of solid, massy atoms requires understanding the force that makes them cohere as a single unity, which cannot itself be understood in terms of elastic collisions. Many contemporary field theories admit of alternative equivalent formulations, one with action at a distance, one with local action only.

Albert Einstein unveiled two theories of extraordinary yield: the special theory of relativity (1905) and the general theory of relativity (1915).



Consider Hurley’s discussion of what she calls Marcel’s case. Here subjects are asked to report the appearance of some item in consciousness in three ways at the same time~say, by blinking, pushing a button, and saying, ‘I see it’. Remarkably, any of these acts can be done without the other two. The question is: What does this tell us about unified consciousness? In a case in which the subject pushes the button but neither blinks nor says anything, for example, is the hand-controller aware of the object while the blink-controller and the speech-controller are not? Could the conscious system become fragmented in such a way?

Hurley’s answer is that it cannot. What induces the appearance of incoherence about unity is the short time scale. Suppose that it takes some time to achieve unified consciousness, perhaps because some complex processes are involved. If that were the case, then Marcel’s case does not present a stable unity situation at all: The subjects are simply not given enough time to achieve unified consciousness of any kind.

There is a great deal more to Hurley’s work. She urges, for example, that there is a normative dimension to unified consciousness: Conscious states have to cohere for unified consciousness to result. Systems in the brain have to achieve what she calls ‘dynamic singularity’~being a single system~for unified consciousness to result.

A third issue that got philosophers working on the unity of consciousness again is binding. Here the connection is more distant, because binding as usually understood is not unified consciousness as we have been discussing it. Recall the two stages of cognition laid out earlier. First, the mind ties together various sensory information into representations of objects. Then the mind ties these represented objects to one another to achieve unified consciousness of a number of them at the same time. It is the first stage that is usually called binding. The representations that result at this stage need not be conscious in any of the ways delineated earlier~many perfectly good representations affect behaviour and even enter memory without ever becoming conscious. Representations resulting from the second stage need not be conscious, either, but when they are, we have at least some of the kinds of unified consciousness delineated.

In the past few decades, philosophers have also worked on how unified consciousness relates to the brain. Lockwood, for example, thinks that relating consciousness to matter will involve more issues on the side of matter than most philosophers think. (We mentioned that his work goes off in two new directions. This is the second one.) Quantum mechanics teaches us that the way in which observation links to physical reality is a subtle and complex matter. Lockwood urges that our conceptions will have to be adjusted on the side of matter as much as on the side of mind if we are to understand consciousness as a physical phenomenon and physical phenomena as open to conscious observation. If it is the case not only that our understanding of consciousness is affected by how we think it might be implemented in matter, but also that processes of matter are affected by our (conscious) observation of them, then our picture of consciousness stands as ready to affect our picture of matter as vice versa.

The Churchlands, Paul M. and Patricia S., and Daniel Dennett (1991) have radical views of the underlying architecture of unified consciousness. The Churchlands see unity itself much as other philosophers do. They do argue that the term ‘consciousness’ covers a range of different phenomena that need to be distinguished from one another, but the important point here is that they urge that the architecture of the underlying processes probably consists not of transformations of symbolically encoded representations, as most philosophers have believed, but of vector transformations in what are called phase spaces. Dennett articulates an even more radical view, encompassing both unity and underlying architecture. For him, unified consciousness is simply a temporary ‘virtual captain’: a small group of related information-parcels that happens to gain temporary dominance in a struggle for control of such cognitive activities as self-monitoring and self-reporting in the vast array of microcircuits of the brain. We take these transient phenomena to be more than they are because each of them is, for the moment, ‘me’: The temporary coalition of conscious states winning at the moment is what I am, is the self. Radical implementation, narrowed range, and transitoriness notwithstanding, when unified consciousness is achieved, these philosophers tend to see it in the way we have presented it.

Dennett’s and the Churchlands’ views fit naturally with a dynamic systems view of the underlying neural implementation. The dynamic systems view is the view that unified consciousness is a result of certain self-organizing activities in the brain. Dennett thinks that given the nature of the brain, a vast assembly of neurons receiving electrochemical signals from other neurons and passing such signals to yet other neurons, cognition could not take any form other than something like a pandemonium of competing bits of content, the ones that win the competitions being the ones that are conscious. The Churchlands nonetheless tend to agree with Dennett about this. They see consciousness as a state of the brain, the ‘wet-ware’, not a result of information processing, of ‘software’. They also advocate a different picture of the underlying neurological processes. As we said, they think that transformations of complex vectors in a multi-dimensional phase space are the crucial processes, not competition among bits of content. However, they agree that it is very unlikely that the processes that subserve unified consciousness are sentence-like or language-like at all. It is too early to say whether these radically novel pictures of the system that implements unified consciousness will hold any important implications for what unified consciousness is or when it is present.

Hurley is also interested in the relationship of unified consciousness to brain physiology. It would be truer, however, to say that she resists certain standard ways of linking them than that she herself links them. In particular, while she clearly thinks that physiological phenomena have all sorts of implications and give rise to all sorts of questions about the unity of consciousness, she strongly resists any simplistic patterns of connection. Many researchers have been attracted by some variant of what she calls the isomorphism hypothesis. This is the idea that changes in consciousness will parallel changes in brain structure or function. She wants to insist, to the contrary, that often two instances of the same change in consciousness will go with very different changes in the brain. We saw an example in the last section. In most of us, unified consciousness is closely linked to an intact, functioning corpus callosum. However, in acallosal people, there may be the same unity but achieved by mechanisms, such as cuing activity external to the body, that are utterly different from communication through a corpus callosum. Going the opposite way, different changes in consciousness can go with the same changes to structure and function in the brain.

Two philosophers have gone off in directions different from any of the above, Stephen White (1991) and Christopher Hill (1991). White’s main interest is not the unity of consciousness as such but what one might call the unified locus of responsibility~what it is that ties something together to make it a single agent of actions, i.e., something to which attributions of responsibility can appropriately be made. He argues that unity of consciousness is one of the things that go into becoming unified as such an agent, but not the only thing. Focussed, coherent plans; a continuing single conception of the good; a good autobiographical memory; certain future states of persons mattering to us in a special way (mattering to us because we take them to be future states of ourselves, one would say if it were not blatantly circular); a certain continuing kind and degree of rationality; and certain social norms and practices all play a part as well. In his picture of moral responsibility, unbroken unity of consciousness at and over time is only a small part of the story.

Hill’s fundamental claim is that a number of different relationships between psychological states have a claim to be considered unity relationships, including: being owned by the same subject; being phenomenally next to one another (and the other relationships that states in the field of consciousness appear to bear to one another); being objects of a single conscious state; and jointly having the appropriate sorts of effects (functions). An interesting question, one that Hill does not consider, is whether all these relations are what interests us when we talk about the unity of consciousness or only some of them (and if only some, which ones). Hill also examines scepticism about the idea that clearly bounded individual conscious states exist. Since we have been assuming throughout that such states do exist, it is perhaps fortunate that Hill argues that we can safely do so.

In some circles, the idea that consciousness has a special kind of unity has fallen into disfavour. Nagel (1971), Donald Davidson (1982), and Dennett (1991) have all urged that the mind’s unity has been greatly overstated in the history of philosophy. The mind, they say, works mostly out of the sight and control of consciousness. Moreover, even states and acts of ours that are conscious can fail to cohere. We act against what we know perfectly well to be our own most desired courses of action, for example, or do things while telling ourselves that we must avoid doing them. There is an approach to the small incoherencies of everyday life that does not require us to question whether consciousness is unified in this way: the Freudian approach (e.g., Freud 1916/17). This approach accepts that the unity of consciousness exists much as it presents itself, but argues that the range of material over which it extends is much smaller than philosophers once thought. This latter approach has some appeal. If something is out of sight and/or control, it is out of the sight or control of what? The answer would seem to be: the unified conscious mind. If so, the only necessary difference between the pre-twentieth-century vision of unified consciousness as ranging over everything in the mind and our current vision is that the range of psychological phenomena over which unified consciousness ranges has shrunk.

A final historical note. At the beginning of the 21st century, work on the unity of consciousness continues apace. For example, a major conference was recently devoted to the topic, the Association for the Scientific Study of Consciousness conference held in Brussels in 2000, and encyclopaedias of philosophy (such as this one) and of cognitive science are commissioning articles on it. Psychologists are taking up the issue. Bernard Baars’s (1988, 1997) notion of the global workspace is an example. Another example is work on the role of unified consciousness in the precise control of attention. However, the topic is not yet at the centre of consciousness studies. One illustration of this is that it can still be missing entirely from anthologies of current work on consciousness.

Turning to a different issue: Philosophers used to think that the unity of consciousness has huge implications for the nature of the mind~indeed, that it entails that the mind could not be made out of matter. We saw that the prospects for this inference are not good. What about the nature of consciousness? Does the unity of consciousness have any implications for this issue?

There are currently at least three major camps on the nature of consciousness. One camp sees the ‘felt quality’ of representations as something unique, in particular as quite different from the power of representations to change other representations and shape belief and action. On this picture, representations could function much as they do without its being like anything to have them. They would merely not be conscious. If so, consciousness may not play any important cognitive role at all, its unity included (Jackson 1986; Chalmers 1996). A second camp holds, to the contrary, that consciousness is simply a special kind of representation (Rosenthal 1991; Dretske 1995; Tye 1995). A third holds that what we label ‘consciousness’ is really something else. On this view, consciousness will in the end be ‘analysed away’~the term is too coarse-grained and presents things in too unquantifiable a way to have any use in a mature science of the mind.

The unity of consciousness obviously has strong implications for the truth or falsity of these views. If it is as central and undeniable as many have suggested, its existence may cut against the eliminativist position. With respect to the other positions, however, the unity of consciousness seems neutral.

Whatever its implications for other issues, the unity of consciousness seems to be a real feature of the human mind, indeed central to it. If so, any complete picture of the mind will have to provide an account of it. Even those who hold that the extent to which consciousness is unified has been overrated owe us an account of what has been overrated.

To say one has an experience that is conscious (in the phenomenal sense) is to say that one is in a state of its seeming to one some way. In another formulation, to say an experience is conscious is to say that there is something it is like for one to have it. Feeling pain and sensing colours are common illustrations of phenomenally conscious states. Consciousness has also been taken to consist in the monitoring of one’s own states of mind (e.g., by forming thoughts about them, or by somehow ‘sensing’ them), or else in the accessibility of information to one’s capacities for rational control or self-report. Intentionality has to do with the directedness or aboutness of mental states~the fact that, for example, one’s thinking is of or about something. Intentionality includes, and is sometimes taken to be equivalent to, what is called ‘mental representation.’

It can seem that consciousness and intentionality pervade mental life~perhaps one or both somehow constitute what it is to have a mind. But achieving an articulate general understanding of either consciousness or intentionality presents an enormous challenge, part of which lies in figuring out how the two are related. Is one in some sense derived from or dependent on the other? Or are they perhaps quite independent and separate aspects of mind?

One frequent understanding among philosophers is that consciousness is a certain feature shared by sense-experience and imagery, perhaps belonging also to a broad range of other mental phenomena (e.g., episodic thought, memory, and emotion). It is the feature that consists in its seeming some way to one to have these experiences. To put it another way: Conscious states are states of its seeming somehow to a subject.

For example, it seems to you some way to see red, and it seems to you another way to hear a crash, to visualize a triangle, or to suffer pain. The sense of ‘seems’ relevant here may be brought out by noting that, in the last example, we might just as well speak of the way it feels to be in pain. And~some may say~in the same sense, it seems to you some way to think through the answer to a math problem, or to recall where you parked the car, or to feel anger, shame, or elation. (Note, however, that it is not simply to be assumed that saying it seems some way to you to have an experience is equivalent to saying that the experience itself seems or appears some way to you~that it is an object of appearance. The point is just that the way something sounds to you, the way something looks to you, etc., all constitute ‘ways of seeming.’) States that are conscious in this sense are said to have some phenomenal character or other~their phenomenal character being the specific way it seems to one to have a given experience. Sometimes this is called the ‘qualitative’ or ‘subjective’ character of experience.

Another oft-used means for trying to get at the relevant notion of consciousness, preferred by some, is to say that there is, in a certain sense, always ‘something it is like’ to be in a given conscious state: something it is like for one who is in that state. Relating the two locutions, we might say: there is something it is like for you to see red, to feel pain, etc., and the way it seems to you to have one of these experiences is what it is like for you to have it. The phenomenal character of an experience, then, is what someone would inquire about by asking, e.g., ‘What is it like to experience orgasm?’, and it is what we speak of when we say that we know what that is like, even if we cannot convey this to one who does not know. And if we want to speak of persons, or other creatures (as distinct from their states), being conscious, we will say that they are conscious just if there is something it is like for them to be the creature they are: for example, something it is like to be a nocturnal creature such as a bat.

The examples of conscious states given comprise a varied lot. But some sense of their putative unity as instances of consciousness might be gained by contrasting them with what we are inclined to exclude, or can at least conceive of excluding, from their company. Much of what goes on, we would ordinarily believe, is not (or at any rate, we may suppose is not) conscious in the sense at issue. The leaf's fall from a tree branch, we may suppose, is not a conscious state of the leaf: a state of its seeming somehow to the leaf. Nor, for that matter, is a person's falling off a branch itself a conscious state; rather, the feeling of falling is the sort of thing that is conscious, if anything is. Dreaming of falling would also be a conscious experience in this sense. But, while we can in some way be said to sense the position of our limbs even while dreamlessly asleep, we may still suppose that this proprioception (though perhaps in some sense a mental or cognitive affair) is not conscious: we may suppose that it does not then seem (or feel) any way to us sleepers to sense our limbs, as ordinarily it does when we are awake.

The ‘way of seeming’ or ‘what it is like’ conception of consciousness I have just invoked is sometimes marked by the term ‘phenomenal consciousness.’ But this qualifier ‘phenomenal’ suggests that there are other kinds of consciousness (or perhaps, other senses of ‘consciousness’). Indeed there are, at least, other ways of introducing notions of consciousness. And these may appear to pick out features or senses altogether distinct from that just presented. For example, it is said that some (but not all) of what goes on in the mind is ‘accessible to consciousness.’ Of course this by itself does not so much specify a sense of ‘conscious’ as put one to use. (One will want to ask: just what is this ‘consciousness’ that has ‘access’ to some mental goings-on but not others, and what would having such ‘access’ amount to anyway?) However, some have evidently thought that, rather than speak of consciousness as what has access, we should understand consciousness as itself a certain kind of susceptibility to access. For example, Daniel Dennett (1969) once theorized that one's conscious states are just those whose contents are available to one's direct verbal report, or at least to the ‘speech centre’ responsible for generating such reports. And Ned Block (1995) has proposed that, on one understanding of ‘conscious’ (to be found at work in many ‘cognitive’ theories of consciousness), a conscious state is just a representation ‘poised for free use in reasoning and other direct “rational” control of action and speech.’ Block labels consciousness in this sense ‘access consciousness.’

Block insists that we should distinguish phenomenal consciousness from access consciousness, and he argues that a mental representation's being poised for use in reasoning and rational control of action is neither a necessary nor a sufficient condition for the state's being phenomenally conscious. Similarly, he distinguishes phenomenal consciousness from what he calls ‘reflexive consciousness,’ where this has to do with one's capacity to represent one's own mind to oneself: to have, for example, thoughts about one's own thoughts, feelings, or desires. Such a conception of consciousness finds some support in a tendency to say that conscious states of mind are those one is ‘conscious of’ or ‘aware of’ being in, and to interpret this ‘of’ as indicating that some kind of reflexivity is involved, wherein one represents one's own mental representations. On one prominent variant of this conception, consciousness is taken to be a kind of scanning or perceiving of one's own psychological states or processes: an ‘inner sense.’

Block's threefold division among phenomenal, access, and reflexive consciousness need not be taken to reflect clear and coherent distinctions already contained in our pre-theoretical use of the term ‘conscious.’ Block seems to think that, on the contrary, our initial, ordinary use of ‘conscious’ is too confused even to count as ambiguous. Thus, in articulating an interpretation, or set of interpretations, of the term adequate to frame theoretical issues, we cannot simply describe how it is currently employed; we must assign it a more definite and coherent meaning than is extant in common usage.

Whether or not this is correct, getting solid ground here is not easy, and a number of theorists of consciousness would balk at proceeding on the basis of Block's proposed threefold distinction. Sometimes the difficulty may be merely terminological. John Searle, for example, would recognize phenomenal consciousness, but deny that Block's other two candidates are proper senses of ‘conscious’ at all. The reality of some sort of access and reflexivity is apparently not at issue; the question is just whether either captures a sense of ‘conscious’ (perhaps confusedly) woven into our use of the term. However, in contrast to both Block and Searle, there are also those who doubt that there is a properly phenomenal sense, distinct from the other two, for us to pick out with any term. This is not just a dispute about words, but about what there is for us to talk about with them.

The substantive issues here are very much bound up with differences over the proper way to conceive of the relationship between consciousness and intentionality. If there are distinct senses in which states of mind could be correctly said to be ‘conscious’ (answering perhaps to something like Block's threefold distinction), then there will be distinct questions we can pose about the relation between consciousness and intentionality. But if one of Block's alleged senses is somehow fatally confused, or if he is wrong to distinguish it from the others, or if it is a sense of no term we can with warrant apply to ourselves or our states, then there will be no separate question in which it figures that we should try to answer. Thus, trying to work out a reasoned view about what we are (or should be) talking about when we talk about consciousness is an unavoidable and non-trivial part of trying to understand the relation between consciousness and intentionality.

To clarify further the disputes about consciousness and their links to questions about the relation between consciousness and intentionality, we need to get an initial grasp of the relevant way the terms ‘intentionality’ and ‘intentional’ are used in the philosophy of mind.

We have already had some indication of why it is difficult to get a theory of consciousness started. While the term ‘conscious’ is not esoteric, its use is not easily characterized or rendered consistent in a manner that provides an uncontentious framework for theoretical discussion. Where the term ‘intentional’ is concerned, we also face initially confusing and contentious usage. But here the difficulty lies partly in the fact that the relevant use of cognate terms is simply not that found in common speech (as when we speak of doing something ‘intentionally’). Though ‘intentionality,’ in the sense here at issue, does seem to attach to some real and fundamental (maybe even defining) aspect of mental phenomena, the relevant use of the term is tangled up with some rather involved philosophical history.

One way of explaining what is meant by ‘intentionality’ in the (more obscure) philosophical sense is this: it is that aspect of mental states or events that consists in their being of or about things, as registered in the questions ‘What are you thinking of?’ and ‘What are you thinking about?’ Intentionality is the aboutness or directedness of mind (or states of mind) to things: objects, states of affairs, events. So if you are thinking about San Francisco, or about the increased cost of living there, or about your meeting someone there at Union Square, your mind, your thinking, is directed toward San Francisco, or the increased cost of living, or the meeting in Union Square. To think at all is to think of or about something in this sense. This ‘directedness’ conception of intentionality plays a prominent role in the influential philosophical writings of Franz Brentano and those whose views developed in response to his.

But what kind of ‘aboutness’ or ‘of-ness’ or ‘directedness’ is this, and to what sorts of things does it apply? How do the relevant ‘intentionality-marking’ senses of these words (‘about,’ ‘of,’ ‘directed’) differ from: the sense in which the cat is wandering ‘about’ the room; the sense in which someone is a person ‘of’ high integrity; and the sense in which the river's course is ‘directed’ toward the fields?

It has been said that the peculiarity of this kind of directedness/aboutness/of-ness lies in its capacity to relate thought or experience to objects that (unlike San Francisco) do not exist. One can think about a meeting that has not occurred, or never will occur; one can think of Shangri-La, or El Dorado, or the New Jerusalem, as one may think of their shining streets, their total lack of poverty, or their citizens' peculiar garb. Thoughts, unlike roads, can lead to a city that is not there.

But to talk in this way only invites new perplexities. Is this to say (with apparent incoherence) that there are cities that do not exist? And what does it mean to say that, when a state of mind is in fact ‘directed toward’ something that does exist, that state nevertheless could be directed toward something that does not exist? It can well seem to be something very fundamental to the nature of mind that our thoughts, or states of mind more generally, can be of or about things, can ‘point beyond themselves.’ But a coherent and satisfactory theoretical grasp of this phenomenon of ‘mental pointing’ in all its generality is difficult to achieve.

Another way of trying to get a grip on the topic asks us to note that the mind's potential for directedness toward the non-existent is evidently closely associated with its potential for falsehood, error, inaccuracy, illusion, hallucination, and dissatisfaction. What makes it possible to believe (or even just suppose) something about Shangri-La is that one can falsely believe (or suppose) that something exists. In the case of perception, what makes it possible to seem to see or hear what is not there is that one's experience may in various ways be inaccurate, non-veridical, subject to illusion, or hallucinatory. And what makes it possible for one's desires and intentions to be directed toward what does not and will never exist is that one's desires and intentions can be unfulfilled or unsatisfied. This suggests another strategy for getting a theoretical hold on intentionality: employ a notion of satisfaction, stretched to encompass susceptibility to each of these modes of assessment, each of these ways in which something can either go right or go wrong (true/false, veridical/non-veridical, fulfilled/unfulfilled), and speak of intentionality in terms of having ‘conditions of satisfaction.’ On John Searle's (1983) conception, intentional states are those having conditions of satisfaction. What are conditions of satisfaction? In the case of belief, they are the conditions under which the belief is true; in the case of perception, the conditions under which sense-experience is veridical; in the case of intention, the conditions under which an intention is fulfilled or carried out.

However, while the conditions-of-satisfaction approach to the notion of intentionality may furnish an alternative to introducing this notion by talking of ‘directedness to objects,’ it is not clear that it can get us around the problems posed by the ‘directedness’ talk. For instance, what are we to say where thoughts are expressed using names of non-existent deities or fictional characters? Will we do away with a troublesome directedness to the non-existent by saying that the thoughts that Zeus is Poseidon's brother, and that Hamlet is a prince, are just false? This is problematic. Moreover, how will we state the conditions of satisfaction of such thoughts? Will this not also involve an apparent reference to the non-existent?

A third important way of conceiving of intentionality, one particularly central to the analytic tradition deriving from the study of Frege and Russell, asks us to concentrate on the notion of mental (or intentional) content. Often it is assumed that to have intentionality is to have content. And frequently mental content is otherwise described as representational or informational content, and ‘intentionality’ (at least as this applies to the mind) is seen as just another word for what is called ‘mental representation,’ or for a certain way of bearing or carrying information.

But what is meant by ‘content’ here? As a start we may note: the content of a thought, in this sense, is what is reported, in answering the question ‘What does she think?’, by something of the form ‘She thinks that p.’ And the content of a thought is what two people are said to share when they are said to think the same thought. (Similarly, the contents of beliefs are what two persons share when they hold the same belief.) Content is also what may be shared in this way even while the ‘psychological modes’ of states of mind differ. For example: believing that I will soon be bald and fearing that I will soon be bald share the content that I will soon be bald.

Also, content is commonly taken to be not only that which is shared in the ways illustrated, but that which differs in a way revealed by considering certain logical features of the sentences we use to talk about states of mind. Notably, the constituents of the sentence that fills in for ‘p’ when we say ‘x thinks that p’ or ‘x believes that p’ are often interpreted in such a way that they display ‘failures of substitutivity’ of (ordinarily) co-referential or co-extensional expressions, and this appears to reflect differences in mental content. For example: if George W. Bush is the eldest son of the vice-president under Ronald Reagan, and George W. Bush is the current US president, then it can be validly inferred that the eldest son of Reagan's vice-president is the current US president. However, we cannot always make the same sort of substitutions when we use such terms to report what someone believes. From the fact that you believe that George W. Bush is the current US president, we cannot validly infer that you believe that the eldest son of Reagan's vice-president is the current US president. The latter report may still be false, even if George W. Bush is indeed the eldest son. These logical features of the sentences ‘x believes that George W. Bush is the current US president’ and ‘x believes that George W. Bush is the eldest son of Reagan's vice-president’ seem to reflect the fact that the beliefs reported by their use have different contents: the sentences are used to state what is believed (the belief content), and what is believed in each case is not the same. Someone's belief may have the one content without having the other.

Similar observations can be made about other intentional states and the reports made of them, especially when these reports contain an object clause beginning with ‘that’ and followed by a complete sentence (e.g., she thinks that p; he intends that p; she hopes that p; she fears that p; she sees that p). Sometimes it is said that the content of the state is ‘given’ by such a ‘that p’ clause when ‘p’ is replaced by a sentence: the so-called ‘content clause.’

This ‘possession of content’ conception of intentionality may be coordinated with the ‘conditions of satisfaction’ conception roughly as follows. If states of mind contrast in respect of their satisfaction (say, one is true and the other false), they differ in content. (One and the same belief content cannot be both true and false, at least not in the same context at the same time.) And if one says what the intentional content of a state of mind is, one says much or perhaps all of what conditions must be met if it is to be satisfied: what its conditions of truth, veridicality, or fulfilment are. But one should be alert to the way the notion of content employed in a given philosopher's views is heavily shaped by those views. And one should note how commonly it is held that the ordinary notion of representational content is ambiguous or in need of refinement. (Consider, for example, Jerry Fodor's defence of a distinction between ‘narrow’ and ‘wide’ content, Edward Zalta's distinction between ‘cognitive’ and ‘objective’ content (1988), and John Perry's distinction between ‘reflexive’ and ‘subject-matter’ content.)

It is arguable that each of these gates of entry into the topic of intentionality (directedness, conditions of satisfaction, and mental content) opens onto a unitary phenomenon. But evidently there is also considerable fragmentation in the conceptions of both consciousness and intentionality that are in the field. To get a better grasp of some of the ways the relationship between consciousness and intentionality can be viewed, without begging questions or trying to present a positive theory on the topic, it is useful to take a look at the recent history of thinking about intentionality, in a way that will bring several issues about its relationship with consciousness to the fore. Together with the preceding discussion, this should provide the background necessary for examining some of the differences dividing those who theorize about consciousness, differences intimately involved with views of the consciousness-intentionality relation.

If we are to acknowledge the extent to which the notion of intentionality is the creature of philosophical history, we have to come to terms with the divide in twentieth-century western philosophy between the so-called ‘analytic’ and ‘continental’ philosophical traditions. Both have been significantly concerned with intentionality. But differences in approach, vocabulary, and background assumptions have made dialogue between them difficult. In a brief exposition it is almost inevitable to give largely independent summaries of the two. We will start with the ‘continental’ side of the story, more specifically with the Phenomenological movement in continental philosophy. However, while these traditions have developed without a great deal of intercommunication, they do have common sources, and they have come to focus on issues concerning the relationship of consciousness and intentionality that are recognizably similar.

A thorough look at the historical roots of controversies over consciousness and intentionality would take us farther into the past than it is feasible to go in this article. A relatively recent, convenient starting point is the philosophy of Franz Brentano. He, more than any other single thinker, is responsible for keeping the term ‘intentional’ alive in philosophical discussions of the last century or so, with something like its current use, and he was much concerned to understand its relationship with consciousness. However, it is worth noting that Brentano himself was very aware of the deep historical background to his notion of intentionality: he looked back through scholastic discussions (crucial to the development of Descartes' immensely influential theory of ideas), and ultimately to Aristotle, for his theme of intentionality. One may go further back, to Plato's discussion (in the Sophist and the Theaetetus) of difficulties in making sense of false belief, and yet further still, to the dawn of Western philosophy and Parmenides' attempt to draw momentous consequences from his alleged finding that it is not possible to think or speak of what is not.

In Brentano's treatment, what seems crucial to intentionality is the mind's capacity to ‘refer’ or be ‘directed’ to objects existing solely in the mind: what he called ‘mental or intentional inexistence.’ It is subject to interpretation just what Brentano meant by speaking of an object existing only in the mind and not outside of it, and what he meant by saying that such ‘immanent’ objects of thought are not ‘real.’ He complained that critics had misunderstood him here, and he appears to have revised his position significantly as his thought developed. But it is clear at least that his conception of intentionality is dominated by the first strand in thought about intentionality mentioned above, intentionality as ‘directedness toward an object,’ with whatever difficulty that brings with it.

Brentano's conception of the relation between consciousness and intentionality can be brought out partly by noting that he held that every conscious mental phenomenon is both directed toward an object and always (if only ‘secondarily’) directed toward itself. (That is, it includes a ‘presentation’, and an ‘inner perception’, of itself.) Since Brentano also denied the existence of unconscious mental phenomena, this amounts to the view that all mental phenomena are, in a sense, ‘self-presentational.’

His lectures in the late nineteenth century attracted a diverse group of central European intellectuals (including that great promoter of the unconscious, Sigmund Freud), and the problems raised by Brentano's views were taken up by a number of prominent philosophers of the era, including Edmund Husserl, Alexius Meinong, and Kasimir Twardowski. Of these, it was Husserl's treatment of the Brentanian theme of intentionality that was to have the widest philosophical influence on the European continent in the twentieth century, both by means of its transformation in the hands of other prominent thinkers who worked under the aegis of ‘phenomenology’ (such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty) and through its rejection by those embracing the ‘deconstructionism’ of Jacques Derrida.

In responding to Brentano, Husserl adopted his concern with properly understanding the way in which thought and experience are ‘directed toward objects.’ Husserl criticized Brentano's doctrine of ‘inner perception,’ and did not deny (even if he did not affirm) the reality of unconscious mentation. But Husserl retained Brentano's primary focus on describing conscious ‘mental acts.’ He also believed that knowledge of one's own mental acts rests on an ‘intuitive’ apprehension of their instances, and held that one is, in some sense, conscious of each of one's conscious experiences (though he denied this meant that every conscious experience is an object of an intentional act). Evidently Husserl wished to deny that all conscious acts are objects of inner perception, while also affirming that some kind of reflexivity (one that is, however, neither judgment-like nor sense-like) is essentially built into every conscious act. But the details of the view are not easy to make out. (A similar, and similarly elusive, view was expressed by Jean-Paul Sartre in the doctrine that ‘all consciousness is a non-positional consciousness of itself.’)

One of Husserl's principal points of departure in his early treatment of intentionality (in the Logical Investigations) was his criticism of what he took to be Brentano's notion of the ‘mental inexistence’ of the objects of thought and perception. Husserl thought it a fundamental error to suppose that the object (the ‘intentional object’) of a thought, judgment, desire, etc. is always an object ‘in’ (or ‘immanent to’) the mind of the thinker, judger, or desirer. The objects of one's ‘mental acts’ of thinking, judging, etc. are often objects that ‘transcend,’ and exist independently of, the acts (states of mind) that are directed toward them (that ‘intend’ them, in Husserl's terms). This is particularly striking, Husserl thought, if we focus on the intentionality of sense perception. The object of my visual experience is not something ‘in my mind’ whose existence depends on the experience, but something that goes beyond, or ‘transcends,’ any (necessarily perspectival) experience I may have of it. This view is phenomenologically based, for, Husserl says, the object is experienced as perspectivally given, hence as ‘transcendent’ in this sense.

In cases of hallucination, we should say, on Husserl's view, not that there is an object existing ‘in one's mind,’ but that the object intended does not exist at all. This does not do away with the ‘directedness’ of the experience, for that is properly understood (according to the Logical Investigations) as its having a certain ‘matter,’ where the matter of a mental act is what may be common to different acts when, for example, one believes that it will not rain tomorrow and hopes that it will not rain tomorrow. The difference between the mental acts illustrated (between hoping and believing) Husserl would term a difference in their ‘quality.’ Husserl was later to re-interpret his notions of act-matter and act-quality as components of what he called (in Ideas, 1913) the ‘noema’ or ‘noematic structure’ that can be common to distinct particular acts. So intentional directedness is understood not as a relation to special (mental) objects toward which one is directed, but rather as the possession by mental acts of matter/quality (or later, ‘noematic’) structure.

This connects Husserl's discussion with the ‘content’ conception of intentionality described above: he himself would accept that the matter of an act (later, its ‘noematic sense’) is the same as the content of a judgment, belief, desire, etc., in one sense of the term (or rather, in one sense he found in the ambiguous German ‘Inhalt’). However, it is not fully clear how Husserl would view the relationship between act-matter or noematic sense quite generally and those semantic correlates of ordinary-language sentences that some would identify as the contents of the states of mind they report. This is a difficulty partly because of his later emphasis (e.g., in Experience and Judgment) on the importance of what he called ‘pre-predicative’ experience. He believed that the sort of judgments we express in ordinary and scientific languages are ‘founded on’ the intentionality of pre-predicative experience, and that it is a central task of philosophy to clarify the way in which such experience of our surroundings and our own bodies underlies judgment, and the capacity it affords us to construct an ‘objective’ conception of the world. Pre-predicative experiences are, paradigmatically, sense experiences as they are given to us, independently of any active judging or predication. But did Husserl hold that what makes such experience pre-predicative is that it altogether lacks the content that is expressed linguistically in predicative judgment, or did he think that such judgment merely renders explicit a predicative content that even ‘pre-predicative’ experience already (implicitly) has? Just what does the ‘pre-’ in ‘pre-predicative’ entail?

Perhaps this is not clear. In any case, the theme of a type of intentionality more fundamental than that involved in predicative judgments that ‘posit’ objects, and to be found in everyday experience of our surroundings, was taken up, in different ways, by the later phenomenologists Heidegger and Merleau-Ponty. The former describes a type of ‘directed’ ‘comportment’ toward beings in which they ‘show themselves’ as ‘ready-to-hand.’ Heidegger thinks this characterizes our ordinary practical involvement with our surroundings, and regards it as distinct from, and somehow providing a basis for, entities showing themselves to us as ‘present-at-hand’ (or ‘occurrent’), as they do when we take a less context-bound, more theoretical stance toward the world. Later, Merleau-Ponty (1945/1962), influenced by his study of Gestalt psychology and of neurological case studies describing pathologies of perception and action, held that normal perception involves a consciousness of place tied essentially to one's capacities for exploratory and goal-directed movement, a consciousness that is indeterminate relative to attempts to express or characterize it in terms of ‘objective’ representations, though it makes such an objective conception of the world possible.

Whether or not Heidegger's and Merleau-Ponty's moves in these directions actually contradict Husserl, they clearly go beyond what he says. Another basic, exegetically complex, apparent difference between Husserl and the two later philosophers, pertinent to the relationship of consciousness and intentionality, lies in the dispute over Husserl's proposed ‘phenomenological reduction.’ Husserl claimed it is possible (and, indeed, essential to the practice of phenomenology) to conduct an investigation into the structure of consciousness that carefully abstains from affirming the existence of anything in spatio-temporal reality. By this ‘bracketing’ of the natural world, by reducing the scope of one's assertions first to the subjective sphere of consciousness, then to its abstract (or ‘ideal’) atemporal structure, one is able to apprehend what consciousness and its various forms essentially are, in a way that supplies a foundation for the philosophical study of knowledge, meaning, and value. Both Heidegger and Merleau-Ponty (along with a number of Husserl's other students) appear to have questioned whether it is possible to reduce one's commitments as thoroughly as Husserl appears to have prescribed through a ‘mass abstention’ from judgment about the world, and thus whether it is correct to regard one's intentional experience as a whole as essentially detachable from the world at which it is directed. Seemingly crucial to their doubts about Husserl's reduction is their belief that an essential part of intentionality consists in a distinctively practical involvement with the world that cannot be broken by any mere abstention from judgment.

The Phenomenological themes just hinted at (the notion of a ‘pre-predicative’ type of intentionality; the (un)detachability of intentionality from the world) link with issues regarding consciousness and intentionality as these are understood outside the Phenomenological tradition, in particular the notion of non-conceptual content and the internalism/externalism debate, to be considered in Section 4. But it is by no means a straightforward matter to describe these links in detail. Part of the reason lies in the general difficulty of being clear about whether what one philosopher means by ‘consciousness’ (or its standard translations) is close enough to what another means for it to be correct to see them as speaking to the same issues. And while some of the Phenomenological philosophers (Brentano, Husserl, Sartre) make thematically central use of terms cognate with ‘consciousness’ and ‘intentionality,’ and consider questions about intentionality first and foremost as questions about the intentionality of consciousness, they do not explicitly address much that (in the latter half of the twentieth century) came to seem problematic about consciousness and intentionality. Is their ‘consciousness’ the phenomenal kind? Would they reject theories of consciousness that reduce it to a species of access to content? If so, on what grounds? (Given their interest in the relation of consciousness, inner perception, and reflection, it may be easier to discern what their stances on reductive ‘higher-order representation’ theories of consciousness would be.)

In some ways the situation is more difficult still in the cases of Merleau-Ponty and Heidegger. For the former, though he willingly enough uses words standardly translated as 'consciousness' and 'intentionality,' says little to explain how he understands such terms generally. And the latter deliberately avoids these terms in his central work, Being and Time, in order to forge a philosophical vocabulary free of errors in which they had, he thought, become enmeshed. However, it is not obvious how to articulate the precise difference between what Heidegger rejects, in rejecting the alleged error-laden understanding of 'consciousness' and 'intentionality' (or their German translations), and what he accepts when he speaks of entities 'showing' or 'disclosing' themselves to us, and of our 'comportment' directed toward them.

Nevertheless, one can plausibly read Brentano's notion of 'presentation' as equivalent to the notion of phenomenally conscious experience, as this is understood in other writers. For Brentano says, 'We speak of presentation whenever something appears to us.' And one may take ways of appearing as equivalent to ways of seeming, in the sense proper to phenomenal consciousness. Further, Brentano's project of what he called 'descriptive or Phenomenological psychology,' concerned with how intentional phenomena present themselves, the fundamental kinds to which they belong, and their necessary interrelationships, may plausibly be interpreted as an effort to articulate the philosophically salient, highly general phenomenal character of intentional states (or acts) of mind. And Husserl's attempts to delineate the structure of intentionality as it is 'given' in consciousness, as well as the Phenomenological writings of Sartre, can arguably be seen as devoted to laying bare to thought the deepest and most general characteristics of phenomenal consciousness, as they are found in 'directed' perception, judgment, imagination, emotion and action. Also, one might reasonably regard Heideggerean disclosure of the ready-to-hand and Merleau-Ponty's 'motor-intentional' consciousness of place as forms of phenomenally conscious experience, as long as one's conception of phenomenal consciousness is not tied to the notion that the subjective 'sphere' of consciousness is, in essence, independent of the world revealed through it.

In any event, to connect classic Phenomenological writings with current discussions of consciousness and its relation to intentionality, more background is needed on aspects of the other main current of Western philosophy in the past century particularly relevant to the topic of intentionality, the current broadly labelled 'analytic'.

It seems fair to say that recent work in philosophy of mind in the analytic tradition that has focussed on questions about the nature of intentionality (or 'mental content') has been shaped not so much by the writings of Brentano, Husserl and their direct intellectual descendants, as by the seminal discussions of logico-linguistic concerns found in Gottlob Frege's 'On Sense and Reference' (1892) and Bertrand Russell's 'On Denoting' (1905).

But Frege's and Russell's work comes from much the same era, and from much the same intellectual environment, as Brentano's and the early Husserl's. And fairly clear points of contact have long been recognized. One is Russell's criticism of Meinong's 'theory of objects', which, deriving from the problem of intentionality, led Meinong to countenance objects, such as the golden mountain, that are capable of being the object of thought although they do not exist. This doctrine was one of the principal targets of Russell's theory of definite descriptions. However, it came as part of a complex and interesting package of concepts in the theory of meaning, and scholars are not united in supposing that Russell was fair to it.

Another point of contact is the similarity between Husserl's meaning/object distinction (in Logical Investigation I) and Frege's (prior) sense/reference distinction. Indeed the case has been influentially made (by Follesdal 1969, 1990) that Husserl's 'meaning/object' distinction is borrowed from Frege (though with a change in terminology) and that Husserl's 'noema' is properly interpreted as having the characteristics of Fregean 'sense.'

Nonetheless, a number of factors make comparison and integration of debates within the two traditions complicated and strenuous. Husserl's notion of noema (hence his notion of intentionality) is most fundamentally rooted, not in reflections on the logical features of language, but in a contrast between the object of an intentional act and the object 'as intended' (the way in which it is intended), and in the idea that a structure would remain in perceptual experience even if it were radically non-veridical. And what Husserl seeks is a 'direct' characterization of this (and other) kinds of experience from the point of view of the experiencer. On the other hand, Frege's and Russell's writings bearing on the topic of intentionality concentrate mainly and most explicitly on issues that grow from their own pioneering achievements in logic, and have given rise to ways of understanding mental states primarily through questions about the logic and semantics of the language used to speak of them.

Broadly speaking, logico-linguistic concerns have been methodologically and thematically dominant in the analytic Frege-Russell tradition, while the Phenomenological Brentano-Husserl lineage is rooted in attempts to characterize experience as it is evident from the subject's point of view. For this reason perhaps, discussions of consciousness and intentionality are more obviously intertwined from the start in the Phenomenological tradition than in the analytic one. The following sketch of relevant background in the latter case will, accordingly, most directly concern the treatment of intentionality. But by the end, the bearing of this on the treatment of consciousness in analytic philosophy of mind will have become more evident, and it will be clearer how similar issues concerning the consciousness-intentionality relationship arise in each tradition.

Central to Frege's legacy for discussions of mental or intentional content has been his distinction between 'sense' (Sinn) and 'reference' (Bedeutung), and his application of this distinction to cope with an apparent failure of substitutivity of ordinarily co-referential expressions in the contexts created by psychological verbs, of the sort mentioned in exposition of the notion of mental content, a task important to his development of logic. The need for a distinction between the sense and reference of an expression became evident to Frege when he considered that, even if a is identical to b, and you understand both 'a' and 'b,' still, it can be for you a discovery, an addition to your knowledge, that a = b. This is intelligible, Frege thought, only if you have different ways of understanding the expressions 'a' and 'b', only if they involve for you distinct 'modes of presentation' of the self-same object to which they refer. In Frege's celebrated example: you may understand the expressions 'The Morning Star' and 'The Evening Star' and use them to refer to what is one and the same object, the planet Venus. But this is not sufficient for you to know that the Morning Star is identical with the Evening Star. For the ways in which an object (the 'reference') is 'given' to your mind when you employ these expressions (the senses or Sinne you 'grasp' when you use them) may differ in such a manner that ignorance of astronomy would prevent your realizing that they are but two ways in which the same object can be given.

The relevance of all this to intentionality becomes clearer once we see how Frege applied the sense/reference distinction to whole sentences. The sentence 'The Evening Star = The Morning Star' has a different sense than the sentence 'The Evening Star = The Evening Star', even if their reference (according to Frege, their truth value) is the same. The failure of substitutivity of co-referential expressions in 'that p' contexts created by psychological verbs can consequently be understood (Frege proposed) in this way: the reference of the terms shifts in these contexts, so that, for example, 'the Evening Star' no longer refers to its customary reference (the planet Venus), but to a sense that functions, for the subject of the verb (the person who thinks, judges, desires), as his or her mode of presentation of this object. The sentence occurring in this context no longer refers to its truth value, but to the sense in which the mode of presentation is embedded, which might otherwise be called the 'thought', or, by other philosophers, the 'content' of the subject's state of mind. This thought or content is to be understood not as a mental image, nor literally as anything essentially private to the mind that thinks it, but as one and the same abstract entity that can be 'grasped' by two minds, and that must be so grasped if communication is to occur.

While on the surface this story may appear to be only about logic and semantics, and though Frege did not himself elaborate a general account of intentionality, what he says readily suggests the following picture. Intentional states of mind, thinking about Venus, wishing to visit it, involve some special relation (such as 'mental grasping'), not to anything 'in one's mind,' nor to any imagery, but to an abstract entity, a thought, which also constitutes the sense of a linguistic expression that can be used to report one's state of mind, a sense that is grasped or understood by speakers who use it.

This style of account, together with the Fregean thesis that 'sense determines reference,' and the history of criticisms both have elicited, form much of the background of contemporary discussions of mental content. It is often assumed, with Frege, that we must recognize (as some thinkers in the empiricist tradition allegedly did not) that thoughts or contents cannot consist in images or essentially private 'ideas.' But philosophers have frequently criticized Frege's view of thought as some abstract entity 'grasped' by or 'present to' the mind, and have wanted to replace Frege's unanalyzed 'grasping' with something more 'naturalistic.'

Relatedly, it may be granted that the content of the thought reported is to be identified with the sense of the expression with which we report it. But then, it is argued, the identity of this content will not be determined individualistically, and may, in some respects, lie beyond the grasp of (or not be fully 'present to' the mind of) the psychological subject. For what determines the reference of an expression may be a natural causal relation to the world, as has influentially been argued to be true for proper names, like 'Nixon' and 'Cicero,' and 'natural kind' terms like 'gold' and 'water.' Or (as Tyler Burge (1979) has influentially argued) two speakers who, considered as individuals, are qualitatively the same may nevertheless each assert something different simply because of differing relations they bear to their respective linguistic communities. (For example, what one speaker's utterance of 'arthritis' means is determined not by what is 'in the head' of that speaker, but by the medical experts in his or her community.) And, if the reference and truth conditions of the expressions by which one's thoughts are reported or expressed are not determined by what is in one's head, and the content of one's thoughts determines their reference and truth conditions, then the content of one's thoughts is also not determined individualistically. Rather, it is necessarily bound up with one's causal relations to certain natural substances, and one's membership in a certain linguistic community. Both linguistic meaning and mental contents are 'externally' determined.

The development of such 'externalist' conceptions of intentionality informs the reception of Russell's legacy in contemporary philosophy of mind as well. Russell also helped to put in play a conception of the intentionality of mental states according to which each such state is seen as involving the individual's 'acquaintance with a proposition' (counterpart to Fregean 'grasping'), which proposition is at once both what is understood in understanding expressions by which the state of mind is reported, and the content of the individual's state of mind. Thus, intentional states are 'propositional attitudes.' Also importantly, Russell's famous analysis of definite descriptions into phrases employing existential quantifiers and general predicates underlay many subsequent philosophers' rejection of any conception of intentionality (like Meinong's) that sees in it a relation to non-existent objects. And Russell's treatment drew attention to cases of what he called 'logically proper names' that apparently defy such analysis in descriptive terms (paradigmatically, the terms 'this' and 'that'), and which (he thought) thus must refer 'directly' to objects. Reflection on such 'demonstrative' and 'indexical' (e.g., 'I,' 'here,' 'now') reference has led some to maintain that the content of our states of mind cannot always be constituted by Fregean senses, but must be seen as consisting partly of the very objects in the world outside our heads to which we refer, demonstratively or indexically: another source of support for an 'externalist' view of mental content, hence, of intentionality.

Yet another important source of externalist proclivities in twentieth century philosophy lies in the thought that the meaningfulness of a speaker's utterances depends on their potential intelligibility to hearers: language must be public, an idea that has found varying and influential expression in the work of Ludwig Wittgenstein, W.V.O. Quine, and Donald Davidson. This, coupled with the assumption that intentionality (or 'thought' in the broad (Cartesian) sense) must be expressible in language, has led some to conclude that what determines the content of one's mind must lie in the external conditions that enable others to attribute content.

However, the movement from Frege and Russell toward externalist views of intentionality should not simply be accepted as yielding a fund of established results: it has been subject to powerful and detailed challenges. But without plunging into the details of the internalism/externalism debate about mental content, we can recognize, in the issues just raised, certain themes bearing particularly on the connection between consciousness and intentionality.

For example: it is sometimes assumed that, whatever may be true of content or intentionality, the phenomenal character of one's experience, at least, is 'fixed internally', i.e., that it involves no necessary relations to the nature of particular substances in one's external environment or to one's linguistic community. But then the purported externalist finding that neither meanings nor contents are 'in the head' can, of course, be read as showing the insufficiency of phenomenal consciousness to determine any intentionality or content. Something like this consequence is drawn by Putnam (1981), who takes the stream of consciousness to comprise nothing more than sensations and images, which (as Frege saw) should be sharply distinguished from thought and meaning. This interpretation of the import of externalist arguments may be reinforced by a tendency to tie (phenomenal) consciousness to non-intentional sensations, sensory qualities, or 'raw feels,' and hence to dissociate consciousness from intentionality (and allied notions of meaning and reference), a tendency that has been prominent in the analytic tradition.

But it is not at all evident that externalist theories of content require us to estrange consciousness from intentionality. One might argue (as do Martin Davies (1997) and Fred Dretske (1997)) that in certain relevant respects the phenomenal character of experience is also essentially determined by causal environmental connections. By contrast, one may argue (as do Ludwig (1996b) and Horgan and Tienson (2002)) that since it is conceivable that a subject have experience much like our own in phenomenal character, but radically different in external causes from what we take our own to be (in the extreme case, a mind bewitched by a Cartesian demon into massive hallucination), there must indeed be a realm of mental content that is not externally determined.

One other aspect of the Frege-Russell tradition of theorizing about content that impinges on the consciousness/intentionality connection is this. If 'content' is identified with the sense, or the truth-condition determiners, of the expressions used in the object-clause reporting intentional states of mind, it will seem natural to suppose that possession of mental content requires the possession of conceptual capacities of the sort involved in linguistic understanding: 'grasping senses.' But then, to the extent that the phenomenal character of experience is inadequate to endow a creature with such capacities, it may seem that phenomenal consciousness has little to do with intentionality.

However, this raises large issues. One is this: it should not be granted without question that the phenomenal character of our experience could be as it is in the absence of the sorts of conceptual capacities sufficient for (at least some types of) intentionality. And this is tied to the issue of whether or not the phenomenal character of experience is (as some suppose) a purely sensory affair. Some would maintain, on the contrary, that thought (not just imagistic, but conceptual thought) has phenomenal character too. If so, then it is very far from clear that phenomenal character can be divorced from whatever conceptual capacities are necessary for intentionality.

Moreover, we may ask: are concepts, properly speaking, always necessary for intentionality anyway? Here another issue rears its head: is there not perhaps a form of sensory intentionality, which does not require anything as distinctively intellectual or conceptual as is needed for the grasping of linguistic senses or propositions? (This presumably would be a kind of intentionality had by the pre-linguistic (e.g., babies) or by non-linguistic creatures (e.g., dogs).) Suppose that there is, and that this type of intentionality is inseparable from the phenomenal character of perceptual experience. Then, even if one assumes that such phenomenal consciousness is insufficient to guarantee the possession of concepts, it would be wrong to say that it has little to do with intentionality. (Advocates of varying versions of the idea that there is a distinctively 'non-conceptual' kind of content include Bermudez 1998, Crane 1992, Evans 1982, Peacocke 1992, and Tye 1995; for a notable voice of opposition to this trend, see McDowell 1994.) A deep difficulty in assessing these debates lies in getting an acceptable conception of concepts with which to work. We need to understand clearly what 'having a concept of F' does and does not require, before we can be clear about the content of and justification for the thesis of non-conceptual content.

These proposals about non-conceptual content bear some affinity with aspects of the Phenomenological tradition alluded to earlier: Husserl's notion of 'pre-predicative' experience; Heidegger's treatment of the 'ready-to-hand'; and Merleau-Ponty's idea that in normal active perception we are conscious of place, not via a determinate 'representation' of it, but rather, relative to our capacities for goal-directed bodily behaviour. Though to see the extent to which any of these are 'non-conceptual' in character would require not only more clarity about the conceptual/non-conceptual contrast, but also considerable exegesis of these philosophers' works.

Also, one may plausibly try to find an affinity between externalist views in analytic philosophy and the later phenomenologists' rejection of Husserl's reduction, based on their doubt that we can prise consciousness off from the world at which it is directed and study its 'intentional essence' in solipsistic isolation. But even if externalism can be defined broadly enough to encompass Heidegger, Merleau-Ponty, Kripke, and Burge, the comparison is strained when we take account of the different sources of 'externalism' in the phenomenologists. These have to do, it seems (very roughly), with the idea that the way we are conscious of things (or at least, for Heidegger, the way they 'show themselves' to us) in our everyday activity cannot be quite generally separated from our actual engagement with the entities of which we are thus conscious (which show themselves in this way). Also relevant is the idea that one's use of language (hence one's capacity for thought) requires gearing one's activity to a social world or cultural tradition, in which antecedently employed linguistic meaning is taken up and made one's own through one's relation with others. All this is supposed to make it infeasible to study the nature of intentionality by globally uprooting, in thought, the connection of experience with one's spatial surroundings (and, crucially for Merleau-Ponty, one's own body) and one's social environment. Whatever the merits of this line of thought, we should note: neither a causal connection with 'natural kinds' unmediated by reference-determining 'modes of presentation,' nor deference to the linguistic usage of specialists, nor belief in the need to reconstruct speakers' meaning from observed behaviour, plays a role in the phenomenologists' doubts about the reduction.

The arduous exegesis required for a clearer and more detailed comparison of these views is not possible here. Nevertheless, by following some of the main lines of thought in treatments of intentionality, descending on the one hand primarily from Brentano and Husserl, and on the other from Frege and Russell, certain fundamental issues concerning its relationship to consciousness have emerged. These include, first, the connection between consciousness and self-directed, self-reflexive intentionality. (It has already been seen that this topic preoccupied Brentano, Husserl and Sartre; its emergence as an important issue in analytic philosophy of mind will become more evident below.) Second, there is concern with the way in which (and the extent to which) mind is world-involving. (In the Phenomenological tradition this can be seen in the controversy over Husserl's Phenomenological reduction; in the Frege-Russell tradition it shows up in the internalism/externalism debate about mental content.) Third, there is the putative distinction between conceptual or theoretical, and sensory or practical, forms of intentionality. (In phenomenology this shows up in Husserl's contrast between judgment and pre-predicative experience, and related notions of his successors; in analytic philosophy it shows up in the (more recent) attention to the notion of 'non-conceptual' content.)

For more clarity regarding the consciousness-intentionality relationship, and how these three topics figure in views about it, it is necessary now to turn attention back to philosophical disagreements regarding consciousness, in which each of these issues arises.

Consider the proposal that sense experience manifests a kind of intentionality distinct from and more basic than that involved in propositional thought and conceptual understanding. This might help form the basis for an account of consciousness. Perhaps conscious states of mind are distinguished partly by their possession of a type of content proper to the sensory subdivision of mind.

One source of the idea that a difference in type of content helps constitute a distinction between what is and is not phenomenally conscious lies in the apparent distinction between sense experience and judgment. To have conscious visual experience of a stimulus, for it to look some way to you, is one thing; to make judgments about it is something else. (This seems evident in the persistence of a visual illusion even once one has become convinced of the error.) However, on some accounts of consciousness this distinction itself is doubtful, since conscious sense experience is taken to be nothing more than a form of judging. Such a view is expressed by Daniel Dennett (1991), who takes the relevant form of judging to consist in one's possession of information or mental content available to the appropriate sort of 'probes', an availability of content he calls 'cerebral celebrity.' For Dennett what distinguishes conscious states of mind is not their possession of a distinctive type of intentional content, but rather the richness of that content and its availability to the appropriate sort of cognitive operations. (Since the relevant class of operations is not sharply defined, neither, for Dennett, is the difference between which states of mind are conscious and which are not.)

Recent accounts of consciousness that, by contrast, give central place to a distinction between (conceptual) judgment and (non-conceptual, but still intentional) sense-experience include Michael Tye's (1995) theory, holding that it is (by metaphysical necessity) sufficient for having a conscious sense-perception that some representation of sensory stimuli is formed in one's head, 'map-like' in character, whose ('non-conceptual') content is 'poised' to affect one's (conceptual) beliefs. This form of mental representation Tye would contrast with the 'sentential' form proper to belief and judgment, and in that way he might preserve the judgment/experience contrast as Dennett does not. Consider also Fred Dretske's (1995) view, that phenomenally conscious sensory intentionality consists in a kind of mental representation whose content is bestowed through a naturally selected 'function to indicate.' Such natural (evolution-implanted) sensory representation can arise independently of learning (unlike the more conceptual, language-dependent sort), and is found widely distributed among evolved creatures.

Both Tye's and Dretske's views of consciousness (unlike Dennett's) make crucial use of a contrast between the type of intentionality proper to sense-experience and that proper to linguistically expressed judgment. On the other hand, there is also some similarity among the theories, which can be brought out by noting a criticism of Dennett's view, analogues of which arise for Tye's and Dretske's views as well.

Some might think Dennett's account concerns only some variety of what Block would call 'access consciousness'. For on Dennett's account, it seems, to speak of visual consciousness is to speak of nothing over and above the sort of availability of informational content that is evinced in unprompted verbal discriminations of visual stimuli. And this view has been criticized for neglecting phenomenal consciousness. It seems we may conceive of a capacity for spontaneous judgment triggered by and responsive to visual stimuli, which would occur in the absence of the judger's phenomenally conscious visual experience of the stimuli: the stimuli do not look any way to the subject, and yet they trigger accurate judgments about their presence. The notion of such a (hypothetical) form of 'blind-sight' may be elaborated in such a way that we conceive of the judgment it affords as being at least as finely discriminatory (and as fine in informational content) as that enjoyed by those with extremely poor, blurry and un-acute conscious visual experience (as in the 'legally blind'). But a view like Dennett's seems to make this scenario inconceivable.

However, this kind of criticism does not concern only those theories that would elide any experience/judgment distinction. For Tye's and Dretske's theories, though they depend on forms of that contrast (and are offered as theories of phenomenal consciousness), can raise similar concerns. For one might think that the hypothetical blind-sighter would be as rightly regarded as having Tye's 'poised' map-like representations in her visual system as would someone with a comparable form of conscious vision. And one might find it unclear why we should think the visual system of such a blind-sighter must be performing its naturally endowed indicating functions more poorly than the visual system of a consciously sighted subject would.

Whatever the cogency of these concerns, one should note their distinctness from the issues about 'kinds of intentionality' that appear to separate both Tye and Dretske from Dennett. The notion that there is a fundamental distinction to be drawn in kinds of intentional content (separating the more intellectual from the more sensory departments of mind) sometimes forms the basis of an account of consciousness (as with Dretske's and Tye's, though not with Dennett's). But it is also important to recognize what unites Dennett, Tye, and Dretske. Despite their differences, all propose to account for consciousness by starting with a general understanding of intentionality (or mental content or representation) to which consciousness is inessential.

They then offer to explain consciousness as a special case of intentionality thus understood: in terms of the operations the content is available for, or the form in which it is represented, or the nature of its external source. The blind-sight-based objection to Dennett, and its possible extension to Dretske and Tye, helps bring this commonality to light. The discussion so far has shown how some theories purport to account for consciousness on the basis of intentionality in a way that focuses attention on attempts to discern a distinctively sensory type of intentionality. A different strategy for explaining consciousness via intentionality highlights the importance of clarity regarding the connection between consciousness and reflexivity. On such a view (roughly): experiences or states of mind are conscious just insofar as the mind represents itself as having them.

In David Rosenthal's variant of this approach, a state is conscious just when it is a kind of (potentially non-conscious) mental state one has, which one (seemingly without inference) thinks that one is in. A theory of this sort starts with some way of classifying mental states that is supposed to apply to conscious and non-conscious states of mind alike. The proposal then is that such a state is conscious just when it belongs to one of those mental kinds, and the ('higher order') thought occurs to the person in that state that he or she is in a state of that kind. So, for example, it is maintained that certain non-conscious states of mind can possess 'sensory qualities' of various sorts: one may, in a sense, be in pain without feeling pain; one may have a red sensory quality even when nothing looks red to one. The idea is that one has a conscious visual experience of red, or a conscious pain sensation, just when one has such a red sensory quality, or pain-quality, and the thought (itself also potentially non-conscious) occurs to one that one has a red sensory quality, or pain-quality.

This way of accounting for consciousness in terms of intentionality may, like the theories already mentioned, provoke the concern that the distinctively phenomenal sense of consciousness has been slighted, though this time, not in favour of some 'access' consciousness, but in favour of reflexive consciousness. One focus of such criticism lies in the idea that such higher-order thought requires the possession of concepts, concepts of types of mental states, that may be lacking in creatures with first order mentality. And it is unclear (in fact it seems false to say) that these beings would therefore have no conscious sensory experience in the phenomenal sense. Might there not be a way the world looks to rabbits, dogs, monkeys, and human babies, and might they not feel pain, though they lack the conceptual wherewithal to think about their own experience?

One line of response to such concerns is simply to bite the bullet: dogs, babies and the like might altogether lack higher-order thought, but that is no problem for the theory because, indeed, they also altogether lack feelings. Rosenthal, for his part, takes a different line: lack of cognitive sophistication need not disqualify one from consciousness, since the possession of primitive mentalistic concepts requires so little that practically any organism we would consider a serious candidate for sensory consciousness (certainly babies, dogs and bunnies) would easily pass muster.

A number of additional worries have been raised about both the necessity and the sufficiency of ‘higher-order thought’ for conscious sense experience. In the face of such doubts, one may preserve the idea that consciousness consists in some kind of higher-order representation (the mind's ‘scanning’ itself) by abandoning ‘higher-order thought’ for another form of representation: one that is not thought-like or conceptual, but somehow sensory in character. Perhaps, somewhat as we can distinguish between primitive sensory perception of things in our environment and the more intellectual, conceptual operations based on it, so we can distinguish the thoughts we have about our own (‘inner’) mental goings-on from the (‘inner’) sensing of them. And, if we propose that consciousness consists in this latter sort of higher-order representation, it seems we will escape the worries occasioned by the Rosenthalian variant of the ‘reflexivist’ doctrine. In such theories, two of the consciousness-themes identified earlier come together: the reflexivity of thought, or higher-order representation, and the contrast between conceptual and non-conceptual (or sensory) forms of representation.

Criticism of ‘inner sense’ theories is likely to focus not so much on the thought that such inner sensing can occur without phenomenal consciousness, or that the latter can occur without the former, as on the difficulty in understanding just what inner sensing (as distinct from higher-order thought) is supposed to be, and why we should think we have it. It seems inner-sense theorists share with those who distinguish between conceptual and non-conceptual (or sensory) flavours of intentionality the challenge of clarifying and justifying some version of this distinction. But they bear the additional burden of showing how such a distinction can be applied not just to intentionality directed at tables and chairs, but at the ‘furniture of the mind’ as well. One may grant that there are non-conceptual sensory experiences of objects in one's external environment while doubting one has anything analogous regarding the ‘inner’ landscape of mind.

It should be noted that, in spite of the difficulties faced by higher-order representation theories, they draw on certain perennially influential sources of philosophical appeal. We do have some willingness to speak of conscious states of mind as states we are conscious or aware of being in. It is tempting to interpret this as indicating some kind of reflexivity. And the history of philosophy reveals many thinkers attracted to the idea that consciousness is inseparable from some kind of self-reflexivity of mind. As noted, varying versions of this idea can be found in Brentano, Husserl, and Sartre, and, going further back, in Kant (1787), who spoke explicitly of ‘inner sense,’ and in Locke (1690), who defined consciousness as the ‘perception of what passes in a man's mind.’ Brentano (controversially) interpreted Aristotle's enigmatic and terse discussion of ‘seeing that one sees’ in De Anima as an anticipation of his own ‘inner perception’ view. However, there is a critical difference between the thinkers just cited and contemporary purveyors of higher-order representation theories. The former do not maintain, as do the latter, that consciousness consists in one's forming the right sort of higher-order representation of a possibly non-conscious type of mental state. Even if they think that consciousness is inseparable from some sort of mental reflexivity, they do not suggest that consciousness can, so to speak, be analysed into mental parts, none of which essentially requires consciousness. (Some could not maintain this, since they explicitly deny mentality without consciousness.) There is a difference between saying that reflexivity is essential to consciousness and saying that consciousness just consists in, or is reducible to, a species of mental reflexivity. Advocacy of the former without advocacy of the latter is certainly possible.

Suppose one holds that phenomenal consciousness is distinguishable both from ‘access’ and ‘reflexivity,’ and that it cannot be explained as a special case of intentionality. One might conclude from this that phenomenal consciousness and intentionality constitute two distinct realms within the mental domain, and embrace the idea that the phenomenal is a matter of non-intentional qualia or raw feels. One important current in the analytic tradition has evinced this attitude; it is found, for example, in Wilfrid Sellars' (1956) distinction between ‘sentience’ (sensation) and ‘sapience.’ Whereas the qualities of feeling involved in the former (mere sensations) require no cognitive sophistication and are readily attributable to brutes, the latter, involving awareness of and awareness that, requires that one have the appropriate concepts, which cannot be guaranteed by just having sensations; one needs learning and inferential capacities of a sort Sellars believed possible only with language. ‘Awareness,’ Sellars says, ‘is a linguistic affair.’

Thus we may arrive at a picture of mind that places sensation on one side, and thought, concepts, and ‘propositional attitudes’ on the other. If one recognizes a distinctively phenomenal consciousness not captured in ‘representationalist’ theories of the kinds just scouted, one may then want to say that this is because the phenomenal belongs to mere sentience, and the intentional to sapience. Other influential philosophers of mind have operated with a similar picture. Consider Gilbert Ryle's (1949) contention that the stream of consciousness contains nothing but sensations, which provide ‘no possibility of deciding whether the creature that had these was an animal or a human being; an idiot, a lunatic, or a sane man’; of sensations alone it is inappropriate to ask whether they are correct or incorrect, veridical or non-veridical. And Wittgenstein's (1953) influential criticism of the notion of understanding as an ‘inner process,’ and of the idea of a language for private sensation divorced from public criteria, could be interpreted in ways that sever (phenomenal) consciousness from intentionality. (Such an interpretation would assume that if consciousness could secure understanding, understanding would be an ‘inner process,’ and that if phenomenal character bore intentionality with it, private sensations could impart meaning to words.) Also recall Putnam's conviction that the (internal) stream of consciousness cannot furnish the (externally fixed) content of meaning and belief. A similar attitude is evident in Donald Davidson's distinction between sensation and thought: the former is nothing more than a causal condition of knowledge, while the latter can furnish reasons and justifications, but cannot occur without language.
Richard Rorty (1979) makes a Sellarsian distinction between the phenomenal and the intentional key to his polemic against epistemological philosophy overall, and ‘foundationalism’ in particular (and takes a generally deflationary view of the phenomenal or ‘qualitative’ side of this divide).

But it is possible to reject attempts to subsume the phenomenal under the intentional, as in the ‘representationalist’ accounts of consciousness variously exemplified in Dennett, Dretske, Lycan, Rosenthal, and Tye, without adopting this ‘two separate realms’ conception. We can believe that there is no conception of the intentional from which the phenomenal can be explanatorily derived that does not already include the phenomenal, while still believing that the phenomenal character of experience cannot be separated from its intentionality, and that having experience of the right sort of phenomenal character is sufficient for having certain forms of intentionality.

Here one might leave open the question whether there is also some kind of phenomenal character (perhaps that involved in some kinds of bodily sensation or after-images) whose possession is not sufficient for intentionality. (If we say there is such non-intentional phenomenal character, this would give us a special reason for rejecting representationalist explanations of phenomenal consciousness.) If, on the other hand, we say phenomenal character always brings intentionality with it, that view might be called ‘representationalist’ of a sort. But its endorsement is consistent with a rejection of attempts to derive phenomenality from intentionality, or to reduce the former to a species of the latter, which commonly attract the ‘representationalist’ label. We should distinguish the question of whether the phenomenal can be explained by the intentional from the question of whether the phenomenal is separable from the intentional.

Closer consideration of two of the three themes earlier identified as common to the Phenomenological and analytic traditions is needed to come to grips with the latter question. It is necessary to ask: (1) whether an externalist conception of intentionality can justify separating phenomenal character from intentionality; and (2) whether one's verdict on the ‘separability’ question stands or falls with acceptance of some version of a distinction between conceptual and non-conceptual (or distinctively sensory) forms of intentionality.

The dialectical situation regarding (1) is complex. One way it may seem plausible to answer question (1) in the affirmative, and to restrict phenomenal character and intentionality to different sides of some internal/external divide, is to conduct a Cartesian thought experiment, in which one conceives of consciousness with all its subjective riches surviving the utter annihilation of the spatial realm of nature. (Similarly, but less radically, one may conceive of a ‘brain in a vat’ generating an extended history of sense experience indistinguishable in phenomenal character from that of an embodied subject.) If one is committed to an externalist view of intentionality, but rejects the intentionalizing strategies for dealing with consciousness, one may conclude that phenomenal character is altogether separable from (and insufficient for) intentionality. However, one may draw rather different conclusions from the Cartesian thought experiment, turning it against externalism. It may seem that, since the intentionality of experience would apparently survive along with its phenomenal character, the causal tie between the mind's content and the world of objects beyond it that (according to some versions of externalism) fixes content is, in at least some cases (or for some contents), no more than contingent. Alternatively, whatever one relies on to argue that this or that relation of experience and world is essential to having any intentionality at all, one might take it to show that phenomenal character is also externally determined, in a way that renders the Cartesian scenario of consciousness totally unmoored from the world an illusion.
And if Merleau-Ponty or Heidegger thinks that Husserl's Phenomenological reduction to a sphere of ‘pure’ consciousness cannot be completed, and their reasons make them externalists of some sort, this hardly establishes that they are committed to a realm of raw sensory phenomenal consciousness devoid of intentionality. In fact their rejection of Husserl's notion of ‘uninterpreted’ sensory or ‘hyletic’ data in experience would seem to indicate that they, at least, would strongly deny holding such views.

In this arena it is far from clear what we are entitled to regard as secure ground and what as ‘up for grabs.’ However, there do seem to be ways, which all would probably admit, in which the phenomenal character of experience and externally individuated content come apart, ways in which such content goes beyond anything phenomenal consciousness can supply. For the way it seems to me to experience this computer screen may be no different from the way it seems to my twin to experience some entirely distinct one. Thus, where intentional contents are distinguished in such a way as to include the particular objects experienced or thought of, phenomenal character cannot determine the possession of content. Still, that does not show that no content of any sort is fixed by phenomenal character. Perhaps, as some would say, phenomenal character determines ‘narrow’ or ‘notional’ content, but not ‘wide’ (externally fixed) content. Nor is it even clear that we must judge the sufficiency of phenomenal character for intentionality by adopting some general account of content and its individuation (as ‘narrow’ or ‘wide,’ for instance), and then asking whether one's possession of content so considered is entailed by the phenomenal character of one's experience. One may argue that the phenomenal character of one's experience suffices for intentionality as long as having it makes one assessable for truth, accuracy, or other sorts of ‘satisfaction,’ without the addition of any interpretation properly so-called, such as is involved in assessment of the truth or accuracy of sentences or pictures.

Even if one does not globally divide phenomenal character from intentionality along some inner/outer boundary, to address questions of the sufficiency of phenomenal character for intentionality (and thus of the separability of the latter from the former), one still needs to consider question (2) above, and the potential relevance of distinctions that have been proposed between conceptual and non-conceptual forms of content or intentionality. Again the situation is complex. Suppose one regards the notion of non-conceptual intentionality or content as unacceptable, on the grounds that all content is conceptual. But suppose one also thinks it clear that phenomenal character is confined to sensory experience and imagery, and that this cannot bring with it the rational and inferential capacities required for genuine concept possession. Then one will have accepted the separability of phenomenal consciousness from intentionality. However, one may, by contrast, take the apparent susceptibility of phenomenally conscious sense experience to assessment for accuracy, without need for additional, potentially absent interpretation, to show that the phenomenal character of experience is inherently intentional. Then one will say that the burden lies on anyone who claims that conceptual powers are crucial to such assessability and can be detached from the possession of such experience: they must identify those powers and show that they are both crucial and detachable in this way. Additionally, one may reasonably challenge the assumption that phenomenal consciousness is indeed confined to the sensory realm; one may say that conceptual thought also has phenomenal character. Even if one does not, one may still base one's confidence in the sufficiency of phenomenal character for intentionality on one's confidence that there is a kind of non-conceptual intentionality that clearly belongs essentially to sense experience.

From these considerations we can see that, in order to decide whether phenomenal character is wholly or significantly separable from intentionality, it is critical to answer the following questions. (i) Does every sort of intentionality that belongs to thought and experience require an external connection for which phenomenal character is insufficient?

(ii) Does every sort of intentionality that belongs to sense experience and sensory imagery require conceptual abilities for which phenomenal character is insufficient? And (iii) does every sort of intentionality that belongs to thought require conceptual capacities for which phenomenal character is insufficient?

Suppose one finds phenomenal character quite generally inadequate for the intentionality of thought and sense experience, by answering ‘yes’ either to (i), or to both (ii) and (iii). And suppose one makes the plausible (if non-trivial) assumption that what guarantees intentionality for neither sensory experience, nor imagery, nor conceptual thought guarantees no intentionality that belongs to our minds at all (including that of emotion, desire and intention, for these presuppose the former). Then one will find phenomenal character altogether separable from intentionality. Phenomenal character could be as it is, even if intentionality were completely taken away. There is no form of phenomenal consciousness, and no sort of intentionality, such that the first suffices for the second.

A more moderate view might answer only one of (ii) and (iii) in the affirmative (and probably (iii) would be the choice). But even in that case one recognizes some broad mental domain whose intentionality is in no respect guaranteed by phenomenal character. And that too would mark a considerable limitation on the extent to which phenomenal consciousness brings intentionality with it.

On the other hand, suppose that one answers ‘no’ to (i), and to either (ii) or (iii). Now, external connections and conceptual capacities seem to be what we might most plausibly regard as conditions necessary for the intentionality of thought and experience that could be stripped away while phenomenal character remains constant. So if one thinks that neither is in fact generally essential to intentionality and removable while phenomenal character persists unchanged, and one can think of nothing else that is essential for thought and experience to have any intentionality but for which phenomenal character is insufficient, it seems reasonable to conclude that phenomenal character is indeed sufficient for intentionality of some sort. If one has gone this far, it seems unlikely that one will then think that actual differences in phenomenal character still leave massively underdetermined the different forms of intentionality we enjoy in perceiving and thinking. So one will probably judge that some kind of phenomenal character suffices for, and is inseparable from, many significant forms of intentionality in at least one of these domains (sensory or cognitive): there are many differences in phenomenal character, and many in intentionality, such that one cannot have the former without the latter. If one also rejects both (ii) and (iii), then one will accept that appropriate forms of phenomenal consciousness are sufficient for a very broad and important range of human intentionality.

Suppose one rejects both the view that consciousness is explanatorily derived from a more fundamental intentionality and the view that phenomenal character is insufficient for intentionality because it is a matter of purely inward feeling. One might then press farther and argue for what Flanagan calls ‘consciousness essentialism’: the view that the phenomenal character of experience is not only sufficient for various forms of intentionality, but necessary as well.

This type of thesis needs careful formulation. It does not necessarily commit one to a Cartesian (or Brentanian or Sartrean) claim that all states of mind are conscious, a total denial of the reality of the unconscious. A more qualified thesis does seem desirable. Though Freud's waning prestige may have weakened tendencies to assume that he had somehow demonstrated the reality of unconscious intentionality, the rise of cognitive science has created a new climate of educated opinion that takes elaborate non-conscious mental machinations for granted. Even if we do not acquiesce in this view, we do appeal (and long have appealed) to explanations of human behaviour that recognize some sorts of intentional states other than phenomenally conscious experiences and thoughts.

One way of maintaining the necessity of consciousness to mind that preserves some space for mind that is not conscious is Searle's. Roughly, he proposes that we first distinguish between ‘intrinsic’ intentionality on the one hand, and merely ‘as if’ intentionality and ‘interpreter-relative’ intentionality on the other. We may sometimes speak as if artifacts (like thermostats) had beliefs or desires, but this is not to be taken literally. And we may impose ‘conditions of satisfaction’ on our acts and creations (words, pictures, diagrams, etc.) by our interpretation of them, but these have no intentionality independent of our interpretive practices. Intrinsic intentionality, on the other hand, the kind that pertains to our beliefs, perceptions, and intentions, is neither a mere manner of speech, nor is our possession of it derived from others' interpretive stance toward us. But then, Searle asks, what accounts for the fact that intrinsically intentional states are directed at objects under aspects, and why are they directed under the aspects they are (why do they have the content they do)? With conscious states of mind, Searle says, their phenomenal or subjective character determines their ‘aspectual shape.’ Where non-conscious states of mind are concerned, there is nothing to do the job but their relationship to consciousness. The right relationship, he holds, is this: a non-conscious state of mind must be ‘potentially conscious.’ If some psychological theories (of language, of vision) postulate an unconscious so deeply buried that its mental representations cannot even potentially become conscious, so much the worse for those theories.

Searle's views have aroused a number of criticisms. Among the problem areas are these. First, how are we to explain the requirement that intrinsically intentional states be ‘potentially conscious,’ without making it either too easy or too difficult to satisfy? Second, just why is it that the intrinsic intentionality of non-conscious states needs accounting for, while that of conscious states is somehow unproblematic? Third, it appears Searle's argument does not offer any general reason to rule out all efforts to give ‘naturalistic’ accounts of conditions sufficient to impose, without the help of consciousness, genuine and not merely interpreter-relative intentionality.

Another approach is taken by Kirk Ludwig, who argues that there is nothing to determine whose state of mind a given non-conscious state of mind is, unless that state consists in a disposition to produce a conscious mental state of the right sort. Alleged mental processes that did not tend to produce someone's conscious states of mind appropriately would be no one's, which is to say that they would not be mental states at all. Roughly: consciousness is needed to provide that unity of mind without which there would be no mind. And Ludwig argues that it is therefore a mistake to attribute to us many of the unconscious inferences with which psychological theorists have long been wont to populate our minds.

The persuasiveness of Searle's and Ludwig's arguments depends heavily on demonstrating the failure of alternative accounts of the job that they enlist consciousness to do (such as securing ‘aspectual shape,’ or ownership). One may grant (as does Colin McGinn 1991) that phenomenal character is inseparable from intentionality, but cannot be explained by it, while still maintaining that genuine intentionality (mental content) is quite adequately imposed on animal brains by their acquisition of natural content-bearing functions, in which consciousness evidently plays no essential role. Or one may (like Jerry Fodor 1987) maintain a robust realist ‘representational theory of mind,’ proposing that the content of mental symbols is stamped on them by their being in the ‘right causal relation’ to the world, while despairing of the prospects for a credible naturalistic theory of consciousness.

The preceding discussion has conveyed some of the complexities and potential ambiguities in talk of ‘consciousness’ and ‘intentionality’ that must be appreciated if one is to resolve questions about the relationship between consciousness and intentionality with any clarity. Brief surveys of relevant aspects of the Phenomenological and analytic traditions have brought out some shared areas of interest, namely: the relationship of consciousness to reflexivity and ‘self-directed’ intentionality; the distinction between conceptual and non-conceptual (or sensory) forms of intentionality; and the question of the extent to which either conscious experience or intentional states of mind are essentially ‘world-involving.’ These concerns were seen to bear on attempts to account for consciousness in terms of intentionality, and on questions that arise even if those attempts are rejected, questions regarding the separability of phenomenal consciousness and intentionality. Some attention was given to views that, in some sense, reverse the order of explanation proposed by intentionalizing views of consciousness, and take the facts of consciousness to explain the facts of intentionality. Now it is possible to step back and distinguish four general views of the consciousness-intentionality relationship discernible in the philosophical positions canvassed above, as follows.

(1) Consciousness is explanatorily derived from intentionality.

(2) Consciousness is underived and separable from intentionality.

(3) Consciousness is underived but also inseparable from intentionality.

(4) Consciousness is underived from, inseparable from, and essential to intentionality.

To adopt view (1) is to accept some intentionalizing strategy with respect to consciousness, such as is variously represented by Dennett, Dretske, Lycan, Rosenthal, and Tye. These views differ importantly among themselves. Their differences have much to do with how they treat consciousness-reflexivity issues and the conceptual/non-conceptual (or conceptual/sensory) contrast, and how they view the intersection between the two. But if we accept (1), then our answer to the question of what consciousness has to do with intentionality will ultimately be given in some prior general account of content or intentionality. And there will be no special issue regarding the internal or external fixation of the phenomenal character of experience, over and above what arises for mental content generally.

On the other hand, suppose one rejects (1), and holds that experiences are conscious in a phenomenal sense that does not yield to an approach in which one conceives of intentionality (or content, or information-bearing) independently of consciousness, and then, by adverting to special operations, or sources, or contents, tells us what consciousness is. At this point, one faces a choice between (2) and (3).

By embracing (2) we adopt the ‘raw feel’ conception of phenomenality seemingly implicit in Sellars and Ryle. If, on the other hand, we accept (3), we endorse a much more intimate relationship between consciousness and intentionality. Without proposing to account for the former on the basis of the latter, we hold that phenomenal character is sufficient for intentionality.

But adoption of (3) leaves open a further basic question. Consciousness (of the appropriate sort) may be sufficient for, though underived from, intentionality, while intentionality itself does not require consciousness. Thus we come to ask whether having conscious experience of an appropriate sort is necessary to having either sensory or more-than-sensory (conceptual) intentionality. In adopting thesis (4) we say ‘yes’: such intentionality can come only with consciousness, and we will probably have gone as far in making consciousness fundamental to mind as one reasonably can. Again, this is not necessarily to deny the reality of non-conscious mental phenomena. But it can, in a broad way, be interpreted as siding with Husserl, Ludwig and Searle in thinking of consciousness as the irreplaceable source of intentionality and meaning.

This abstract list of four options might leave one without a sense of what is at stake in adopting this or that view. Perhaps the positions themselves will become a little clearer if we make explicit four broad areas of philosophical concern to which the choice among them is relevant.

First, they are relevant to the issue of how to conceive of the mind, or the domain of psychology, as a whole. Is there some unity to the concept of mind or of psychological phenomena? Is there something that deserves to be considered the essence of the mental? If consciousness can be thoroughly intentionalized (as (1) would have it), maybe (with suitable qualifications) we could uphold the thesis that intentionality is the ‘mark of the mental.’ If we reject (1) and embrace (3), seeing intentionality as inseparable from the phenomenal character of experience, then we might still maintain that both consciousness and intentionality are necessary for real minds, at least if we adopt (4) as well. But a unified view of the mind seems difficult (if possible) to maintain if one segregates phenomenal character to non-intentional sensation, as in (2). Even if one does not, one may lack a unifying conception of the mental domain if one is not satisfied with arguments that phenomenal consciousness is essential to genuine (not merely ‘as if’ or ‘interpreter-derived’) intentionality. In any case, both consciousness and intentionality signify broad psychological categories, and one's view of their extension and relationship will do much to draw one's map of psychology's terrain.

Second (and relatedly), views about the consciousness-intentionality relationship bear significantly on general questions about the explanation of mental phenomena. One may ask what kinds of things we might try to explain in the mental domain, what sorts of explanations we should seek, and what prospects of success we have in finding them. If we accept (1) and some intentionalizing account of consciousness, we will not suppose, as do some (Chalmers 1996, Levine 2001, McGinn 1991, and Nagel 1974), that phenomenal consciousness poses some specially recalcitrant (maybe hopelessly unsolvable) problem for reductive physicalist or materialist explanations. Rather, we will see the basic challenge as that of giving a natural scientific account of intentionality or mental representation. And this indeed is a reason some are attracted to (1): one may believe that it offers the only hope for a natural scientific understanding of consciousness. The underlying thought is that a science of consciousness must adopt this strategy. First, conceive of intentionality (or content, or mental representation) in a way that separates it from consciousness, and see intentionality as the outcome of familiar (and non-intentional) natural causal processes. Then, by further specifying the kind of intentionality involved (in terms of its use, its sources, its content), account for consciousness. In other words: ‘naturalize’ intentionality, then intentionalize consciousness, and mind has found its place in nature.

However, we should recognize a distinction between those whose envisioned naturalistic explanation would require underlying forms of necessity and impossibility stronger than those pertaining to laws of nature generally (such as conceptual or ‘metaphysical’ necessity), and those who see the link between explanans and explanandum as simply one of natural scientific law. David Chalmers' (1996) proposals for ‘naturalistic dualism’ (unlike those of the aforementioned naturalizers) put him in the second group. He argues that phenomenal consciousness in its various forms supervenes (not conceptually or metaphysically, but only as a matter of nature's laws) on functional organization, and that this permits us to envisage (‘non-reductive’) ways of explaining consciousness by appeal to such organization.

Those who reject attempts to explain phenomenal consciousness via a theory of intentionality may still reasonably proclaim allegiance to ‘naturalism.’ One may take phenomenal consciousness to be, in a sense, psychologically basic (if all that is mental is either phenomenally conscious or intentional, and no intentionalizing account of phenomenal character is feasible). But one might still hold that some non-intentional neuropsychological, or other recognizably physicalist, explanation of the phenomenal character of experience is to be had, even if the explanatory link does not exhibit an appropriately strong conceptual or metaphysical necessity. On this view, nothing stronger than psychophysical laws of nature is needed to give us the prospect of a natural scientific account of consciousness.

However, if we not only reject intentionalizing accounts of phenomenal character, but also see it as inseparable from intentionality (if we reject both (1) and (2)), then whatever problems attach to physicalist explanations of consciousness will also infect prospects for explaining intentionality, to some extent at least. And this will hold even if we remain aloof from (4), and do not claim that phenomenal consciousness is essential to intentionality. For if we think that much of the intentionality we have in perceiving, imagining, and thinking is integral to the phenomenal character of such experience, then without a reductive explanation of that phenomenal character, our possession of the intentionality it brings with it will not be reductively explained either.

Finally, it should be noted that if one holds (4), this may have important consequences for what forms of psychological explanation one finds acceptable, since on this view one's mental processes must have the right relationship to one's conscious experiences to count as one's mental processes at all. If those who hold (4) are right, postulated processes that do not bear this relation to our experiential lives cannot be going on in our minds.

Third, the choice among (1)-(4) has epistemological consequences. If one embraces (2), along with something like a Sellarsian or Davidsonian distinction between sensation and thought, putting phenomenal character exclusively on the ‘sensation’ side and intentionality exclusively on the ‘thought’ side of this divide, the place of consciousness in a philosophical account of knowledge will likely be meager: at most, phenomenal character will be a causal condition, without a role to play in the warrant or justification of claims to knowledge. However, if one takes routes (1) or (3), the situation will appear rather different. If one either intentionalizes consciousness or views intentionality as inseparable from phenomenal character, there will be more room to view consciousness as central to accounts of the warrant involved in first-person (‘introspective’) knowledge of mind, and in empirical or perceptual knowledge. Just how one goes about this, and with what success, will depend on how (if one chooses (1)) one intentionalizes consciousness, and (if one chooses (1) or (3)) on what sort of intentionality or content one thinks phenomenal consciousness brings with it. The place of consciousness in one's understanding of introspective or empirical knowledge will be rather different depending on how one resolves the issues regarding reflexivity, the conceptual/non-conceptual distinction, and externalism.

A fourth area of philosophical concern, closely bound to our conception of the relation of consciousness and intentionality, has to do with value. How intimately is consciousness bound up with those features of our own and others' lives that give them intrinsic or non-instrumental value for us? We may think that the pleasure and suffering that demand our ethical concern are necessarily phenomenally conscious, and that this evaluative significance remains even if phenomenal character is non-intentional. However, the more intentionality is seen as inherent to the phenomenal character of experience, the more the latter will be bound to manifestations of intelligence, emotion, and understanding that appear to give human (and perhaps at least some animal) life its special importance for us. It may seem that those opting for (3) share at least this much ground with their intentionalizing opponents who go for (1): both (unlike those who adopt (2)) are in a position to claim consciousness is crucial to whatever special moral regard we think appropriate only toward those whose psychologies involve a kind of intentionality for which possession of painful or pleasant experience is not sufficient. However, this needs qualification on two counts. First, if one's embrace of (1) includes an intentionalizing strategy that limits phenomenal character to the sensory realm, one will limit the moral significance of phenomenal consciousness accordingly. Second, to those who hold (3), it may seem their opponents' intentionalizing theories remove from view the very qualities of experience that make life worth living, and so they will hardly seem like allies on the issue of value.
Further, if one goes so far as to take on (4), conscious essentialism, those who make that additional commitment might wonder how those who do not could ultimately accord the possession of consciousness much greater non-instrumental value than the possession of a sophisticated but totally non-conscious mind.

From this survey it seems fair to conclude that working out a detailed view of the relation between consciousness and intentionality is hardly a peripheral matter philosophically. Potentially it has extensive consequences for one's views concerning these four important, broad topics: (I) the unity of mental phenomena (do consciousness or intentionality, or both together, somehow unify the domain of the psychological?); (II) the explanation of mental phenomena (can consciousness and intentionality be explained separately? is explaining the one the key to explaining the other?); (III) introspective and empirical knowledge (what relation to intentionality would give consciousness a central epistemological role in either?); (IV) the value of human and other animal life (what relation of consciousness and intentionality, if any, underlies the non-instrumental value we accord ourselves and others?).

We collectively glorify our ability to think as the distinguishing characteristic of humanity; we personally, and mistakenly, glorify our thoughts as the distinguishing pattern of who we are. From the inner voice of thought-as-words to the wordless images within our minds, thoughts create and limit our personal world. Through thinking we abstract and define reality, reason about it, react to it, recall past events, and plan for the future. Yet thinking remains both woefully underdeveloped in most of us and grossly overvalued. We can best gain some perspective on thinking in terms of energies.

Automatic thinking draws us away from the present, wistfully allowing our thoughts to meander where they will, carrying our passive attention along with them. Like water running down a mountain stream, thoughts running on autopilot careen through the spaces of perception, randomly triggering associative links within our vast storehouse of memory. In and of itself, such associative thought is harmless. However, our tendency to believe in, act upon, and drift away with such undirected thought keeps us operating in an automatic mode. Lulled into an inner passivity by our daydreams and thought streams, we lose contact with the world of actual perceptions, of real life. In the automatic mode of thinking, I am completely identified with my thoughts, believing that my thoughts are me.

Another mode of automatic thinking consists of repetitious and habitual patterns of thought. These thought tapes and our running commentary on life, unexamined by the light of awareness, keep us enthralled, defining who we are and perpetuating all our limiting assumptions about what is possible for us. Driving and driven by our emotions, these ruts of thought create our false persona, the mask that keeps us disconnected from others and from our own authentic self. More than any other single factor, automatic thinking hinders our contact with presence, limits our being, and forms our path. The autopilot of thought constantly calls us away from the current of immediacy, keeping us fixed on the most superficial levels of our being.

Sometimes we even notice strange, unwanted thoughts that we consider horrible or shameful. We might be upset or shaken that we would think such thoughts, but those reactions only serve to sustain the problematic thoughts by feeding them energy. Furthermore, that self-disgust is based on the false assumption that we are our thoughts, that even unintentional thoughts, arising from our conditioned minds, are us. They are not us, and we need not act upon or react to them. They are just thoughts, with no inherent power and no real message about who we are. We can just relax and let them go, or not. Troubling thoughts that recur over a long period and hinder our inner work may require us to examine and heal their roots in our conditioning, perhaps with the help of a psychotherapist.

Sensitive thinking puts us in touch with the meaning of our thoughts and enables us to think logically, solve problems, make plans, and carry on a substantive conversation. A good education develops our ability to think clearly and intentionally with the sensitive energy. With that energy level in our thinking brain, no longer totally submerged in the thought stream, we can move about in it, choosing among and directing our thoughts based on their meaning.

Conscious thinking means stepping out of the thought stream altogether, and surveying it from the shore. The thoughts themselves may even evaporate, leaving behind a temporary empty streambed. Consciousness reveals the banality and emptiness of ordinary thinking. Consciousness also permits us to think more powerfully, holding several ideas, their meanings and ramifications in our minds at once.

When the creative energy reaches thought, truly new ideas spring up. Creative thinking can happen after a struggle, after exhausting all known avenues of relevant ideas and giving up, shaping and emptying the stage so the creative spark may enter. The quiet, relaxed mind also leaves room for the creative thought, a clear channel for creativity. Creative and insightful thoughts come to all of us in regard to the situations we face in life. The trick is to be aware enough to catch them, to notice their significance, and if they withstand the light of sober and unbiased evaluation, to act on them.

In the spiritual path, we work to recognize the limitations of thought, to recognize its power over us, and especially to move beyond it. Along with Descartes, we subsist in the realm of ‘thoughts,’ but thoughts are just thoughts. They are not us. They are not who we are. No thought can enter the spiritual realms. Rather, the material world defines the boundaries of thought, despite its power to conceive lofty abstractions. We cannot think our way into spiritual reality. On the contrary, identification with thinking prevents us from entering the depths. As long as we believe that refined thinking represents our highest capacity, we shackle ourselves exclusively to this world. All our thoughts, all our books, all our ideas wither before the immensity of the higher realms.

A richly developed body of spiritual practices engages thought, from repetitive prayer and mantras, to contemplation of an idea, to visualizations of deities. In a most instructive and invaluable exercise, we learn to see beyond thought by embracing the gaps, the spaces between thoughts. After sitting quietly and relaxing for some time, we turn our attention toward the thought stream within us. We notice thoughts come and go of their own accord, without prodding or pushing from us. If we can abide in this relaxed watching of thought, without falling into the stream and flowing away with it, the thought stream begins to slow and the thoughts fragment. Less enthralled by our thoughts, we begin to see that we are not our thoughts. Less controlled by, and at the mercy of, our thoughts, we begin to be aware of the gaps between thought particles. These gaps open to the consciousness underlying all thought. Settling into these gaps, we enter and become the silent consciousness beneath thought. Instead of our being in our thoughts, our thoughts are in us.

There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. The distinctive contributions of each could be enhanced by finding a useful common framework for further exploring the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.

Recent historical gaps between neuroscience/cognitive science and psychotherapy are being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison, in the two traditions, of the concepts of the ‘unconscious’ and the ‘conscious’ and the relations between the two. It is suggested that these be understood as two independent ‘story generators,’ each with different styles of function and both operating optimally as reciprocal contributors to each other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.

For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend a considerable amount of time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.

The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that ‘human behaviour and all that it entails . . . are a function of the nervous system’ is itself a story used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, what I want to do is to explore the implications and significance of the fact that there are different stories and that they might be about the same (some)thing.

In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. In that new story, what is here being called the ‘nervous system’ others are free to call the ‘self,’ ‘mind,’ ‘soul,’ or whatever best fits their own stories. What is important is the idea that the multiple things evident in these conflicting stories may not in fact be disconnected and adversarial entities but could rather be fundamentally, understandably, and valuably interconnected parts of the same thing.

‘Non-conscious Prediction and a Role for Consciousness in Correcting Prediction Errors’ by Regina Pally (Pally, 2004) is the take-off point for my enterprise. Pally is a practising psychiatrist, psychoanalyst, and psychotherapist who has actively engaged with neuroscientists to help make sense of her own observations. I am a neuroscientist who recently spent two years as an Academic Fellow of the Psychoanalytic Centre of Philadelphia, an engagement intended to expand my own set of observations and forms of story-telling. The hope is that from this complementarity, and from our similarities and differences, something of significance will emerge in this commentary.

Many psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are at best irrelevant to their own activities, and at worst destructive; the same probably holds, in reverse, for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line ‘In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought.’ Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary. Given this complexity and richness, there is substantially less reason than there once was to believe psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things.

Pally is, I suspect, more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. That is an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists wrote that ‘the sooner we recognize that the complex higher functional Gestalts which leave the reflex physiologist confounded in fact have their roots in the simplest functions, the sooner we will see that the terminological barriers that seem insurmountable between the lower levels of neurophysiology and higher behavioural theory simply dissolve away.’

And in 1951 another wrote: ‘I am coming more and more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system.’

Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950s through the 1980s) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was ‘simple’ and ‘mechanistic,’ which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns, perhaps even threatening and apparently adversarial if one equated the nervous system with ‘mind,’ or ‘self,’ or ‘soul,’ since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy, of course, moved through its own story evolution over the same period. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing, but only an expression of their needed independent evolutions.

An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these ‘shared assumptions’ (as Pally does), since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In that case, the isomorphisms tend to imply, rephrasing Gertrude Stein, that there proves to be a there there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of ‘transference’ and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely ‘unconscious,’ and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her thought that an analyst can help by bringing the model to ‘consciousness’ through the intermediary of recognizing the transference onto the analyst.

The increasing recognition of substantial complexity in the nervous system, together with the presence of identifiable isomorphisms, provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as it does in their similarities/isomorphisms, in the potential for differing and not obviously isomorphic stories to modify one another productively, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than smoothly fitting together, and perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.

Unconscious stories and ‘reality.’ Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are understood to be, but are not necessarily, accurate descriptions of the ‘real world.’ Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two ‘stories,’ with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies that, as Pally says, we do not see ‘reality,’ but only have stories to describe it that result from processes of which we are not consciously aware.

All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of ‘reality.’ In the present context, what is important is that this set of questions sometimes seems to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who largely think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of ‘reality,’ and do so without being fully conscious of it. And psychotherapists actually make more use of the idea of ‘reality’ than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect ‘traumas’ and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins reflect genetic information and hence bear little or no relation to ‘reality’ in the sense usually meant. They may, in addition, reflect random ‘play,’ putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between ‘story’ and ‘reality,’ each set of stories could usefully be modified by greater attention to the other. Differing concepts of ‘reality’ (perhaps the very concept itself) get in the way of usefully sharing stories. The neuroscientists'/cognitive scientists' preoccupation with ‘reality’ as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of stories in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.

The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the ‘neurobiological unconscious’ is the same thing as the ‘psychotherapeutic unconscious,’ and whether the perceived relations between the ‘unconscious’ and the ‘conscious’ are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?

An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors, and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful ‘rubbing of edges’ between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow ‘superior’ to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious, of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, as I will point out in the following, but there is enough of a trend to illustrate the point and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but I think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new story that could help with that common problem, and perhaps help both traditions as well.

A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness and intent (i.e., be supported by unconscious neural processes). It is not only modelling the world, prediction, and error correction that can occur this way, but virtually (and perhaps literally) the entire spectrum of behaviour externally observed, including fleeing from threats, approaching good things, generating novel outputs, learning from doing so, and so on.

This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is the area that is so surprisingly rich that it creates, for some people, the puzzle about whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.

As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same. If they are the same, though, the question becomes: in what ways do the ‘unconscious’ and the ‘conscious’ differ at all? Where now are the ‘two stories’? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously. Conscious processing is slower and handles far fewer variables at one time. There are likely a host of other differences in style as well, in the handling of number, for example, and of time.

In the present context, however, perhaps the most important difference in style is one that Lacan called attention to from a clinical/philosophical perspective: the conscious (conscious processing) has as an objective ‘coherence’; it attempts to create a story that makes sense simultaneously of all its parts. The unconscious, on the other hand, is much more comfortable with bits and pieces lying around with no global order. To a neurobiologist/cognitive scientist, this makes perfectly good sense. The circuitry embodying the unconscious (sub-cortical circuitry?) is an assembly of different parts organized for a large number of different specific purposes, and only secondarily linked together to try to assure some coordination. The circuitry involved in conscious processing (neo-cortical circuitry?), on the other hand, seems both to be more uniform and integrated and to have an objective for which coherence is central.

That central coherence is well illustrated by the phenomena of ‘positive illusions,’ exemplified by patients who receive a hypnotic suggestion that there is an object in a room and subsequently walk in ways that avoid the object while providing a variety of unrelated explanations for their behaviour. Similar ‘rationalization’ is, of course, seen in schizophrenic patients and in a variety of less dramatic forms in psychotherapeutic settings. The objective is to make a globally organized, ‘coherent’ story out of the disorganized jumble, a story of (and constituting) the ‘self.’

What all this suggests is that the mind/brain is actually organized to be constantly generating at least two different stories in two different styles. One, written by conscious processes in simpler terms, is a story of/about the ‘self’ and is experienced as such; neuroscience is beginning to develop insights into how such a story can be constructed by neural circuitry. The other is an unconscious ‘story’ about interactions with the world, perhaps better thought of as a series of different ‘models’ of how various actions relate to various consequences. In many ways, the latter are the grist for the former.

In this sense, we are safely back to the two stories that are central in psychotherapy, but perhaps with some added sophistication deriving from neuroscience/cognitive science. In particular, there is no reason to believe that one story is ‘better’ than the other in any definitive sense. They are different stories based on different styles of story telling, with one having advantages in certain sorts of situations (quick responses, large numbers of variables, more direct relation to immediate experiences of pain and pleasure) and the other in other sorts of situations (time for more deliberate responses, challenges amenable to handling with smaller numbers of variables, more coherence, more ability to defer immediate gratification/judgment).

In the clinical/psychotherapeutic context, an important implication of the more neutral view of two story tellers outlined above is that one ought not to over-value the conscious, nor to expect miracles of the process of making conscious what is unconscious. In the immediate context, the issue is this: if the unconscious is capable of ‘correcting prediction errors,’ why appeal to the conscious to achieve this function? More generally, what is the function of that persistent aspect of psychotherapy that aspires to make the unconscious conscious? And why is it therapeutically effective when it is? Here, it is worth calling special attention to an aspect of Pally's argument that might otherwise get a bit lost in the details of her article: ‘. . . the therapist encourages the wife consciously to stop and consider her assumption that her husband does not properly care about her, and effortfully to consider an alternative view and inhibit her impulse to reject him back. This, in turn, creates a new type of experience, one in which he is indeed more loving, such that she can develop new predictions.’

It is not, as Pally describes it, the simple act of making something conscious that is therapeutically effective. What is necessary is to decompose the story consciously (something made possible by its being a story with a small number of variables) and, even more important, to see if the story generates a new ‘type of experience’ that in turn causes the development of ‘new predictions.’ The latter is an effect of the conscious on the unconscious, an alteration of the unconscious brought about by hearing, entertaining, and hence acting on a new story developed by the conscious. It is not ‘making things conscious’ that is therapeutically effective; it is the exchange of stories that encourages the creation of a new story in the unconscious.

For quite different reasons, Grey (1995) earlier made a suggestion not dissimilar to Pally's, proposing that consciousness was activated when an internal model detected a prediction failure, but acknowledged he could see no reason ‘why the brain should generate conscious experience of any kind at all.’ Seemingly, in spite of her title, what is important in Pally's story is not really the detection of prediction errors as such, but the detection of mismatches between two stories, one unconscious and the other conscious, and the resulting opportunity for both to shape a less trouble-making new story. That, briefly, may be why the brain ‘should generate conscious experience’: to reap the benefits of having a second story teller with a different style. Paraphrasing Descartes, one might say ‘I am, and I can think, therefore I can change who I am.’ It is not only the neurobiological ‘conscious’ that can undergo change; it is the neurobiological ‘unconscious’ as well.

More generally, I want to suggest that the most effective psychotherapy requires the recognition, rapidly emerging from the neurosciences and their cognitive counterparts, that the brain/mind has evolved with two (or more) independent story tellers and has done so precisely because there are advantages to having independent story tellers that generate and exchange different stories. The advantage is that each can learn from the other, and the mechanisms for conveying the stories back and forth, and for each story teller to learn from the stories of the others, occur as part of our evolutionary endowment as well. The problems that bring patients into a therapist's office are problems in the breakdown of story exchange, for any of a variety of reasons, and the challenge for the therapist is to reinstate the confidence of each story teller in the value of the stories created by the other. Neither the conscious nor the unconscious is primary; they function best as an interdependent loop, each developing its own story facilitated by the semi-independent story of the other. In such an organization, there is no ‘real’ story and no primacy for consciousness; there is only the ongoing development and, ideally, effective sharing of different stories.

There are, in the story I am outlining, implications for neuroscience/cognitive science as well. The obvious key questions are what one means (in terms of neurons and neuronal assemblies) by ‘stories,’ and in what ways their construction and representation differ in unconscious and conscious neural processing. But even more important, if the story I have outlined makes sense, what are the neural mechanisms by which unconscious and conscious stories are exchanged and by which each kind of story impacts the other? And why (again in neural terms) does the exchange sometimes break down and fail in a way that requires a psychotherapist (an additional story teller) to repair it?

Just as the unconscious and the conscious are engaged in a process of evolving stories for separate reasons and using separate styles, so too have been, and will continue to be, neuroscience/cognitive science and psychotherapy. And it is valuable that both communities continue to do so. But there is every reason to believe that the different stories are indeed about the same thing, not only because of isomorphisms between the differing stories but equally because the stories of each, if listened to, are demonstrably of value to the stories of the other. When breakdowns in story sharing occur, they require people in each community who are daring enough to listen to and be affected by the stories of the other community. Pally has done us all a service as such a person. I hope to further the bridge she has begun to lay, and that others will feel inclined to join in a collective effort that has enormous intellectual potential and relates directly to serious psychological need in the mental health arena. Indeed, there is reason to believe that an enhanced skill at hearing, respecting, and learning from differing stories about similar things would be useful in a wide array of contexts.

The physical basis of consciousness appears to be the major and most singular challenge to the scientific, reductionist world view. In the closing years of the second millennium, advances in the ability to record the activity of individual neurons in the brains of monkeys or other animals while they carry out particular tasks, combined with the explosive development of functional brain imaging in normal humans, have led to a renewed empirical program to discover the scientific explanation of consciousness. This article reviews some of the relevant experimental work and argues that the most advantageous strategy for now is to focus on discovering the neuronal correlates of consciousness.

Consciousness is a puzzling, state-dependent property of certain types of complex, adaptive systems. The best example of one such system is a healthy and attentive human brain. If the brain is anaesthetized, consciousness ceases. Small lesions in the midbrain and thalamus of patients can lead to a complete loss of consciousness, while destruction of circumscribed parts of the cerebral cortex can eliminate very specific aspects of consciousness, such as the ability to be aware of motion or to recognize objects as faces, usually without a concomitant loss of vision. Given the similarity in brain structure and behaviour, biologists commonly assume that at least some animals, in particular non-human primates, share certain aspects of consciousness with humans. Brain scientists, in conjunction with cognitive neuroscientists, are exploiting a number of empirical approaches that shed light on the neural basis of consciousness. Since it is not known to what extent artificial systems, such as computers and robots, can become conscious, this article will exclude them from consideration.

By and large, neuroscientists have made a number of working assumptions that, in the fullness of time, will need to be justified more fully.

(1) There is something to be explained; that is, the subjective content associated with a conscious sensation (what philosophers refer to as qualia) does exist and has its physical basis in the brain. To what extent qualia and all other subjective aspects of consciousness can or cannot be explained within some reductionist framework remains highly controversial.

(2) Consciousness is a vague term with many usages and will, in the fullness of time, be replaced by a vocabulary that more accurately reflects the contribution of different brain processes (for a similar evolution, consider the usage of ‘memory,’ which has been replaced by an entire hierarchy of more specific concepts). Common to all forms of consciousness is that it feels like something (e.g., to ‘see blue,’ to ‘experience a headache,’ or to ‘reflect upon a memory’). Self-consciousness is but one form of consciousness.

It is possible that all the different aspects of consciousness (smelling, pain, visual awareness, affect, self-consciousness, and so on) employ a basic common mechanism or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, one would have gone most of the way toward understanding them all.

(3) Consciousness is a property of the human brain, a highly evolved system. It therefore must have a useful function to perform. Crick and Koch (1998) assume that the function of the neuronal correlate of consciousness is to produce the best current interpretation of the environment, in the light of past experiences, and to make it available, for a sufficient time, to the parts of the brain that contemplate, plan and execute voluntary motor outputs (including language). This needs to be contrasted with the on-line systems that bypass consciousness but that can generate stereotyped behaviours.

Note that in normally developed individuals motor output is not necessary for consciousness to occur. This is demonstrated by locked-in syndrome, in which patients have lost (nearly) all ability to move yet are clearly conscious.

(4) At least some animal species possess some aspects of consciousness. In particular, this is assumed to be true for non-human primates, such as the macaque monkey. Consciousness associated with sensory events in humans is likely to be related to sensory consciousness in monkeys for several reasons. Firstly, trained monkeys show behaviour similar to that of humans on many low-level perceptual tasks (e.g., detection and discrimination of visual motion or depth). Secondly, the gross neuroanatomy of humans and non-human primates is rather similar once the difference in size has been accounted for. Finally, functional magnetic resonance imaging of human cerebral cortex is confirming the existence of a functional organization in sensory cortical areas similar to that discovered by the use of single-cell electrophysiology in the monkey. As a corollary, it follows that language is not necessary for consciousness to occur (although it greatly enriches human consciousness).

It is important to distinguish the general, enabling factors in the brain that are needed for any form of consciousness to occur from the modulating factors that can up- or down-regulate the level of arousal, attention and awareness, and from the specific factors responsible for a particular content of consciousness.

An easy example of an enabling factor is a proper blood supply. Stop the heart and consciousness ceases within a fraction of a minute. This does not imply that the neural correlate of consciousness is in the heart (as Aristotle thought). A neuronal enabling factor for consciousness is the intralaminar nuclei of the thalamus. Acute bilateral loss of function in these small structures, which are widely and reciprocally connected to the basal ganglia and cerebral cortex, leads to immediate coma or profound disruption of arousal and consciousness.

Among the neuronal modulating factors are the various activities in nuclei in the brain stem and the midbrain, often collectively referred to as the reticular activating system, that control in a widespread and quite specific manner the level of noradrenaline, serotonin and acetylcholine in the thalamus and forebrain. Appropriate levels of these neurotransmitters are needed for sleep, arousal, attention, memory and other functions critical to behaviour and consciousness.

Yet any particular content of consciousness is unlikely to arise from these structures, since they probably lack the specificity necessary to mediate a sharp pain in the right molar, the percept of the deep blue California sky, the bouquet associated with a rich Bordeaux, a haunting musical melody and so on. These must be caused by specific neural activity in cortex, thalamus, basal ganglia and associated neuronal structures. The question motivating much of the current research into the neuronal basis of consciousness is: what is the minimal neural activity that is sufficient to cause a specific conscious percept or memory?

For instance, when a subject consciously perceives a face, the retinal ganglion cells whose axons make up the optic nerve that carries the visual information to the brain proper are firing in response to the visual stimulus. Yet it is unlikely that this retinal activity directly correlates with visual perception. While such activity is evidently necessary for seeing a physical stimulus in the world, retinal neurons by themselves do not give rise to consciousness.

Given the comparative ease with which the brains of animals can be probed and manipulated, it seems opportune at this point to concentrate on the neural basis of sensory consciousness. Because primates are highly visual animals and much is known about the neuroanatomy, psychology and computational principles underlying visual perception, vision has proven to be the most popular model system in the brain sciences.

Cognitive and clinical research demonstrates that much complex information processing can occur without involving consciousness. This includes visual, auditory and linguistic priming, implicit memory, the implicit recognition of complex sequences, automatic behaviours such as driving a car or riding a bicycle, and so on (Velmans 1991). Further evidence comes from the dissociations found in patients with lesions in the cerebral cortex (e.g., residual visual function in the professed absence of any visual awareness, known clinically as ‘blindsight,’ in patients with lesions in primary visual cortex).

It can be said that if one is without an idea, one is without a concept, and likewise, if one is without a concept, one is without an idea. An idea (Gk., eidos, visible form) is a notion stretching all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as an independent object of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they give rise to many problems of interpretation, but between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. On the other hand, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of Forms is a celebration of the objective and timeless existence of ideas as concepts, and in his hands ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notable in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

There is also the philosophical doctrine that reality is somehow mind-correlative or mind-coordinated: that the real objects comprising the ‘external world’ are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine centres on the conception that reality as we understand it reflects the workings of mind, and it construes this as meaning that the inquiring mind makes a formative contribution not merely to our understanding of the world but to the very character we attribute to it.

The cognitive scientist Jackendoff (1987) argues at length against the notion that consciousness and thoughts are inseparable and that introspection can reveal the contents of the mind. What is conscious about thoughts are their sensory aspects, such as visual images, sounds or silent speech. Both the process of thought and its content are not directly accessible to consciousness. Indeed, one tradition in psychology and psychoanalysis, going back to Sigmund Freud, hypothesizes that higher-level decision making and creativity are not accessible at a conscious level, although they influence behaviour.

Within the visual modality, Milner and Goodale (1995) have made a masterful case for the existence of so-called on-line systems that bypass consciousness. Their function is to mediate relatively stereotyped visuo-motor behaviours, such as eye and arm movements, reaching, grasping, postural adjustment and so on, in a very rapid, reflex-like manner. On-line systems work in egocentric coordinate systems, and lack certain types of perceptual illusions (e.g., size illusions) as well as direct access to working memory. These features contrast with the function of consciousness alluded to above, namely to synthesize information from many different sources and use it to plan behavioural patterns over time. Milner and Goodale argue that on-line systems are associated with the dorsal stream of visual information in the cerebral cortex, originating in the primary visual cortex and terminating in the posterior parietal cortex.

The problem of consciousness can be broken down into several separate questions. Most, if not all, of these can then be subjected to scientific inquiry.

The major question that neuroscience must ultimately answer can be bluntly stated as follows: it is probable that at any moment some active neuronal processes in our head correlate with consciousness, while others do not; what is the difference between them? The specific processes that correlate with the current content of consciousness are referred to as the neuronal correlate of consciousness, or the NCC. Whenever some information is represented in the NCC, it is represented in consciousness. The NCC is the minimal (minimal, since it is known that the entire brain is sufficient to give rise to consciousness) set of neurons, most likely distributed throughout certain cortical and subcortical areas, whose firing directly correlates with the perception of the subject at the time. Conversely, stimulating these neurons in the right manner with some as yet unheard-of technology should give rise to the same perception as before.

Discovering the NCC and its properties will mark a major milestone in any scientific theory of consciousness.

What is the character of the NCC? Most popular has been the belief that consciousness arises as an emergent property of a very large collection of interacting neurons (for instance, Libet 1993). In this view, it would be foolish to locate consciousness at the level of individual neurons. An alternative hypothesis is that there are special sets of ‘consciousness’ neurons distributed throughout cortex and associated systems. Such neurons represent the ultimate neuronal correlate of consciousness, in the sense that the relevant activity of an appropriate subset of them is both necessary and sufficient to give rise to an appropriate conscious experience or percept (Crick and Koch 1998). Generating the appropriate activity in these neurons, for instance by suitable electrical stimulation during open skull surgery, would give rise to the specific percept.

Any one subtype of NCC neurons would, most likely, be characterized by a unique combination of molecular, biophysical, pharmacological and anatomical traits. It is possible, of course, that all cortical neurons may be capable of participating in the representation of one percept or another, though not necessarily doing so for all percepts. The secret of consciousness would then be the type of activity of a temporary subset of them, consisting of all those cortical neurons that represent that particular percept at that moment. How the activity of neurons across a multitude of brain areas, encoding all the different aspects associated with an object (e.g., the colour of a face, its facial expression, its gender and identity, the sound issuing from its mouth), is combined into a single percept remains puzzling and is known as the binding problem.

What, if anything, can we infer about the location of neurons whose activity correlates with consciousness? In the case of visual consciousness, it has been surmised that these neurons must have access to visual information and project to the planning stages of the brain, that is, to premotor and frontal areas. Since no neurons in the primary visual cortex of the macaque monkey project to any area forward of the central sulcus, Crick and Koch (1998) propose that neurons in V1 do not give rise to consciousness (although V1 is necessary for most forms of vision, just as the retina is). Ongoing electrophysiological, psychophysical and imaging research in monkeys and humans is evaluating this prediction.

While the set of neurons that can express any one particular conscious percept might constitute a relatively small fraction of all neurons in any one area, many more neurons might be necessary to support the firing activity leading up to the NCC. This might resolve the apparent paradox between clinical lesion data suggesting that small and discrete lesions in the cortex can lead to very specific deficits (such as the inability to see colours or to recognize faces in the absence of other visual losses) and functional imaging data showing that any one visual stimulus can activate large swaths of cortex.

Conceptually, several other questions need to be answered about the NCC. What type of activity corresponds to the NCC (it was proposed as long ago as the early part of the twentieth century that spiking activity synchronized across a population of neurons is a necessary condition for consciousness to occur)? What causes the NCC to occur? And, finally, what effect does the NCC have on postsynaptic structures, including motor output?

A promising experimental approach to locating the NCC is the use of bistable percepts, in which a constant retinal stimulus gives rise to two percepts alternating in time, as in a Necker cube (Logothetis 1998). One version of this is binocular rivalry, in which a small image, say a horizontal grating, is presented to the left eye while another image, say a vertical grating, is shown to the corresponding location in the right eye. In spite of the constant visual stimulus, observers ‘see’ the horizontal grating alternate every few seconds with the vertical one (Blake 1989). The brain does not allow the simultaneous perception of both images.

It is possible, though difficult, to train a macaque monkey to report whether it is currently seeing the left or the right image. The distribution of the switching times and the way in which changing the contrast in one eye affects them leave little doubt that monkeys and humans experience the same basic phenomenon. In a series of elegant experiments, Logothetis and colleagues (Logothetis 1998) recorded from a variety of visual cortical areas in the awake macaque monkey while the animal performed a binocular rivalry task. In early visual cortical areas, only a small fraction of cells modulated their response as a function of the percept of the monkey, while 20 to 30% of neurons in higher visual areas of the cortex did so. The majority of cells increased their firing rate in response to one or the other retinal stimulus with little regard to what the animal perceived at the time. In contrast, in a high-level cortical area such as the inferior temporal cortex, almost all neurons responded only to the perceptually dominant stimulus (in other words, a ‘face’ cell fired only when the animal indicated by its performance that it saw the face and not the pattern presented to the other eye). This makes it likely that the NCC involves activity of neurons in the inferior temporal lobe. Lesions in the homologous area in the human brain are known to cause very specific deficits in conscious face or object recognition. However, it is possible that specific interactions between IT cells and neurons in parts of the prefrontal cortex are necessary in order for the NCC to be generated.

Functional brain imaging in humans undergoing binocular rivalry has revealed that areas in the right prefrontal cortex are activated during the perceptual switch from one percept to the other.

A number of alternative experimental paradigms are being investigated using electrophysiological recordings of individual neurons in behaving animals and human patients, combined with functional brain imaging. Common to these is the manipulation of the complex and changing relationship between the physical stimulus and the conscious percept. For instance, when subjects are forced to respond rapidly to a low-saliency target, both monkeys and humans sometimes claim to consciously perceive such a target in the absence of any physical target (a false alarm) or fail to respond to a target (a miss). The NCC in the appropriate sensory area should mirror the perceptual report under these dissociated conditions. Visual illusions constitute another rich source of experiments that can provide information concerning the neurons underlying illusory percepts. A classical example is the motion aftereffect, in which a subject stares at a continuously moving stimulus (such as a waterfall) for a fraction of a minute or longer. Immediately after this conditioning period, a stationary stimulus will appear to move in the opposite direction. Because of the conscious experience of motion, one would expect the subject's cortical motion areas to be activated in the absence of any moving stimulus.

Future techniques, most likely based on the molecular identification and manipulation of discrete and identifiable subpopulations of cortical cells in appropriate animals, will greatly help in this endeavour.

Identifying the type of activity and the type of neurons that give rise to a specific conscious percept in animals and humans would only be the first, even if critical, step in understanding consciousness. One also needs to know where these cells project to, their postsynaptic action, how they develop in early childhood, what happens to them in mental diseases known to affect consciousness, such as schizophrenia or autism, and so on. And, of course, a final theory of consciousness would have to explain the central mystery: why a physical system with a particular architecture gives rise to feelings and qualia.

The central structure of an experience is its intentionality, its being directed toward something, as it is an experience of or about some object. An experience is directed toward an object by virtue of its content or meaning (which represents the object) together with appropriate enabling conditions.

Phenomenology as a discipline is distinct from but related to other key disciplines in philosophy, such as ontology, epistemology, logic, and ethics. Phenomenology has been practised in various guises for centuries, but it came into its own in the early 20th century, its growth driven by the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others. Phenomenological issues of intentionality, consciousness, qualia, and the first-person perspective have been prominent in recent philosophy of mind.

Phenomenology is commonly understood in either of two ways: as a disciplinary field in philosophy, or as a movement in the history of philosophy.

The discipline of phenomenology may be defined initially as the study of structures of experience, or consciousness. Literally, phenomenology is the study of ‘phenomena’: Appearances of things, or things as they appear in our experience, or the ways we experience things, thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first-person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: Ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.

The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and Jean-Paul Sartre. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy, as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)

In recent philosophy of mind, the term ‘phenomenology’ is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: What it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our ‘life-world.’

Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in contemporary views, while also highlighting the historical tradition that brought the discipline into its own.

Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called ‘intentionality,’ that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experience is directed toward, represents or ‘intends,’ things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.

The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or ‘horizonal’ awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).

Furthermore, in a different dimension, we find various grounds or enabling conditions (conditions of the possibility) of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focussed on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focussed especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality are grounded in brain activity. It remains a difficult question how much of these grounds of experience fall within the province of phenomenology as a discipline. Cultural conditions seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.

The discipline of phenomenology is defined by its domain of study, its methods, and its main results. Phenomenology studies structures of conscious experience as experienced from the first~person point of view, along with relevant conditions of experience. The central structure of an experience is its intentionality, the way it is directed through its content or meaning toward a certain object in the world.

We all experience various types of experience including perception, imagination, thought, emotion, desire, volition, and action. Thus, the domain of phenomenology is the range of experiences including these types (among others). Experience includes not only relatively passive experience as in vision or hearing, but also active experience as in walking or hammering a nail or kicking a ball. (The range will be specific to each species of being that enjoys consciousness; our focus is on our own human experience. Not all conscious beings will, or will be able to, practice phenomenology, as we do.)

Conscious experiences have a unique feature: we experience them, we live through them or perform them. Other things in the world we may observe and engage. But we do not experience them, in the sense of living through or performing them. This experiential or first-person feature, that of being experienced, is an essential part of the nature or structure of conscious experience: as we say, ‘I see/think/desire/do . . .’ This feature is both a phenomenological and an ontological feature of each experience: it is part of what it is for the experience to be experienced (phenomenological) and part of what it is for the experience to be (ontological).

How shall we study conscious experience? We reflect on various types of experiences just as we experience them. That is to say, we proceed from the first-person point of view. However, we do not normally characterize an experience at the time we are performing it. In many cases we do not have that capability: a state of intense anger or fear, for example, consumes the entire focus at the time. Rather, we acquire a background of having lived through a given type of experience, and we look to our familiarity with that type of experience: hearing a song, seeing a sunset, thinking about love, intending to jump a hurdle. The practice of phenomenology assumes such familiarity with the type of experiences to be characterized. Importantly, also, it is types of experience that phenomenology pursues, rather than a particular fleeting experience, unless its type is what interests us.

Classical phenomenologists practised some three distinguishable methods. (1) We describe a type of experience just as we find it in our own (past) experience. Thus, Husserl and Merleau-Ponty spoke of pure description of lived experience. (2) We interpret a type of experience by relating it to relevant features of context. In this vein, Heidegger and his followers spoke of hermeneutics, the art of interpretation in context, especially social and linguistic context. (3) We analyse the form of a type of experience. In the end, all the classical phenomenologists practised analysis of experience, factoring out notable features for further elaboration.

These traditional methods have been ramified in recent decades, expanding the methods available to phenomenology. Thus: (4) In a logico-semantic model of phenomenology, we specify the truth conditions for a type of thinking (say, where I think that dogs chase cats) or the satisfaction conditions for a type of intention (say, where I intend or will to jump that hurdle). (5) In the experimental paradigm of cognitive neuroscience, we design empirical experiments that tend to confirm or refute aspects of experience (say, where a brain scan shows electrochemical activity in a specific region of the brain thought to subserve a type of vision or emotion or motor control). This style of ‘neurophenomenology’ assumes that conscious experience is grounded in neural activity in embodied action in appropriate surroundings, mixing pure phenomenology with biological and physical science in a way that was not wholly congenial to traditional phenomenologists.

What makes an experience conscious is a certain awareness one has of the experience while living through or performing it. This form of inner awareness has been a topic of considerable debate, centuries after the issue arose with Locke's notion of self-consciousness on the heels of Descartes' sense of consciousness (conscience, co-knowledge). Does this awareness-of-experience consist in a kind of inner observation of the experience, as if one were doing two things at once? (Brentano argued no.) Is it a higher-order perception of one's mind's operation, or is it a higher-order thought about one's mental activity? (Recent theorists have proposed both.) Or is it a different form of inherent structure? (Sartre took this line, drawing on Brentano and Husserl.) These issues are beyond the scope of this article, but notice that these results of phenomenological analysis shape the characterization of the domain of study and the methodology appropriate to the domain. For awareness-of-experience is a defining trait of conscious experience, the trait that gives experience a first-person, lived character. It is that lived character of experience that allows a first-person perspective on the object of study, namely, experiences, and that perspective is characteristic of the methodology of phenomenology.

Conscious experience is the starting point of phenomenology, but experience shades off into less overtly conscious phenomena. As Husserl and others stressed, we are only vaguely aware of things in the margin or periphery of attention, and we are only implicitly aware of the wider horizon of things in the world around us. Moreover, as Heidegger stressed, in practical activities like walking along, or hammering a nail, or speaking our native tongue, we are not explicitly conscious of our habitual patterns of action. Furthermore, as psychoanalysts have stressed, much of our intentional mental activity is not conscious at all, but may become conscious in the process of therapy or interrogation, as we come to realize how we feel or think about something. We should allow, then, that the domain of phenomenology, our own experience, spreads out from conscious experience into semi-conscious and even unconscious mental activity, along with relevant background conditions implicitly invoked in our experience. (These issues are subject to debate; the point here is to open the door to the question of where to draw the boundary of the domain of phenomenology.)

To begin an elementary exercise in phenomenology, consider some typical experiences one might have in everyday life, characterized in the first person: (1) I see that fishing boat off the coast as dusk descends over the Pacific. (2) I hear that helicopter whirring overhead as it approaches the hospital. (3) I am thinking that phenomenology differs from psychology. (4) I wish that warm rain from Mexico were falling like last week. (5) I imagine a fearsome creature like that in my nightmare. (6) I intend to finish my writing by noon. (7) I walk carefully around the broken glass on the sidewalk. (8) I stroke a backhand cross-court with that certain underspin. (9) I am searching for the words to make my point in conversation.

Here are rudimentary characterizations of some familiar types of experience. Each sentence is a simple form of phenomenological description, articulating in everyday English the structure of the type of experience so described. The subject term ‘I’ indicates the first-person structure of the experience: the intentionality proceeds from the subject. The verb indicates the type of intentional activity described: perception, thought, imagination, etc. Of central importance is the way that objects of awareness are presented or intended in our experiences, especially, the way we see or conceive or think about objects. The direct-object expression (‘that fishing boat off the coast’) articulates the mode of presentation of the object in the experience: the content or meaning of the experience, the core of what Husserl called noema. In effect, the object-phrase expresses the noema of the act described, that is, to the extent that language has appropriate expressive power. The overall form of the given sentence articulates the basic form of intentionality in the experience: subject-act-content-object.

Rich phenomenological description or interpretation, as in Husserl, Merleau-Ponty et al., will far outrun such simple phenomenological descriptions as above. But such simple descriptions bring out the basic form of intentionality. As we interpret the phenomenological description further, we may assess the relevance of the context of experience. And we may turn to wider conditions of the possibility of that type of experience. In this way, in the practice of phenomenology, we classify, describe, interpret, and analyse structures of experiences in ways that answer to our own experience.

In such interpretive-descriptive analyses of experience, we immediately observe that we are analysing familiar forms of consciousness, conscious experience of or about this or that. Intentionality is thus the salient structure of our experience, and much of phenomenology proceeds as the study of different aspects of intentionality. Thus, we explore structures of the stream of consciousness, the enduring self, the embodied self, and bodily action. Furthermore, as we reflect on how these phenomena work, we turn to the analysis of relevant conditions that enable our experiences to occur as they do, and to represent or intend as they do. Phenomenology then leads into analyses of conditions of the possibility of intentionality, conditions involving motor skills and habits, background social practices, and often language, with its special place in human affairs.

The Oxford English Dictionary presents the following definition: ‘Phenomenology. a. The science of phenomena as distinct from being (ontology). b. That division of any science that describes and classifies its phenomena. From the Greek phainomenon, appearance.’ In philosophy, the term is used in the first sense, amid debates of theory and methodology. In physics and philosophy of science, the term is used in the second sense, albeit only occasionally.

In its root meaning, then, phenomenology is the study of phenomena: literally, appearances as opposed to reality. This ancient distinction launched philosophy as we emerged from Plato's cave. Yet the discipline of phenomenology did not blossom until the 20th century and remains poorly understood in many circles of contemporary philosophy. What is that discipline? How did philosophy move from a root concept of phenomena to the discipline of phenomenology?

Originally, in the 18th century, ‘phenomenology’ meant the theory of appearances fundamental to empirical knowledge, especially sensory appearances. The term seems to have been introduced by Johann Heinrich Lambert, a follower of Christian Wolff. Subsequently, Immanuel Kant used the term occasionally in various writings, as did Johann Gottlieb Fichte and G. W. F. Hegel. By 1889 Franz Brentano used the term to characterize what he called ‘descriptive psychology.’ From there Edmund Husserl took up the term for his new science of consciousness, and the rest is history.

Suppose we say phenomenology studies phenomena: what appears to us, and its appearing. How shall we understand phenomena? The term has a rich history in recent centuries, in which we can see traces of the emerging discipline of phenomenology.

In a strict empiricist vein, what appears before the mind are sensory data or qualia: either patterns of one's own sensations (seeing red here now, feeling this ticklish feeling, hearing that resonant bass tone) or sensible patterns of worldly things, say, the looks and smells of flowers (what John Locke called secondary qualities of things). In a strict rationalist vein, by contrast, what appears before the mind are ideas, rationally formed ‘clear and distinct ideas’ (in René Descartes' ideal). In Immanuel Kant's theory of knowledge, fusing rationalist and empiricist aims, what appears to the mind are phenomena defined as things-as-they-appear or things-as-they-are-represented (in a synthesis of sensory and conceptual forms of objects-as-known). In Auguste Comte's theory of science, phenomena (phénomènes) are the facts (faits, what occurs) that a given science would explain.

In 18th and 19th century epistemology, then, phenomena are the starting points in building knowledge, especially science. Accordingly, in a familiar and still current sense, phenomena are whatever we observe (perceive) and seek to explain. As the discipline of psychology emerged late in the 19th century, however, phenomena took on a somewhat different guise. In Franz Brentano's Psychology from an Empirical Standpoint (1874), phenomena are what occur in the mind: mental phenomena are acts of consciousness (or their contents), and physical phenomena are objects of external perception starting with colours and shapes. For Brentano, physical phenomena exist ‘intentionally’ in acts of consciousness. This view revives a Medieval notion Brentano called ‘intentional in-existence,’ but the ontology remains undeveloped (what is it to exist in the mind, and do physical objects exist only in the mind?). More generally, we might say that phenomena are whatever we are conscious of: objects and events around us, other people, ourselves, even (in reflection) our own conscious experiences, as we experience these. In a certain technical sense, phenomena are things as they are given to our consciousness, whether in perception or imagination or thought or volition. This conception of phenomena would soon inform the new discipline of phenomenology.

Brentano distinguished descriptive psychology from genetic psychology. Where genetic psychology seeks the causes of various types of mental phenomena, descriptive psychology defines and classifies the various types of mental phenomena, including perception, judgment, emotion, etc. According to Brentano, every mental phenomenon, or act of consciousness, is directed toward some object, and only mental phenomena are so directed. This thesis of intentional directedness was the hallmark of Brentano's descriptive psychology. In 1889 Brentano used the term ‘phenomenology’ for descriptive psychology, and the way was paved for Husserl's new science of phenomenology.

Phenomenology as we know it was launched by Edmund Husserl in his Logical Investigations (1900-01). Two importantly different lines of theory came together in that monumental work: psychological theory, on the heels of Franz Brentano (and William James, whose Principles of Psychology appeared in 1891 and greatly impressed Husserl); and logical or semantic theory, on the heels of Bernard Bolzano and Husserl's contemporaries who founded modern logic, including Gottlob Frege. (Interestingly, both lines of research trace back to Aristotle, and both reached importantly new results in Husserl's day.)

Husserl's Logical Investigations was inspired by Bolzano's ideal of logic, while taking up Brentano's conception of descriptive psychology. In his Theory of Science (1835) Bolzano distinguished between subjective and objective ideas or representations (Vorstellungen). In effect Bolzano criticized Kant and, before him, the classical empiricists and rationalists for failing to make this sort of distinction, thereby rendering phenomena merely subjective. Logic studies objective ideas, including propositions, which in turn make up objective theories as in the sciences. Psychology would, by contrast, study subjective ideas, the concrete contents (occurrences) of mental activities in particular minds at a given time. Husserl was after both, within a single discipline. So phenomena must be reconceived as objective intentional contents (sometimes called intentional objects) of subjective acts of consciousness. Phenomenology would then study this complex of consciousness and correlated phenomena. In Ideas I (Book One, 1913) Husserl introduced two Greek words to capture his version of the Bolzanoan distinction: noesis and noema (from the Greek verb noéō, meaning to perceive, think, intend, whence the noun nous or mind). The intentional process of consciousness is called noesis, while its ideal content is called noema. The noema of an act of consciousness Husserl characterized both as an ideal meaning and as ‘the object as intended.’ Thus the phenomenon, or object-as-it-appears, becomes the noema, or object-as-it-is-intended. The interpretations of Husserl's theory of noema have been several and amount to different developments of Husserl's basic theory of intentionality. (Is the noema an aspect of the object intended, or rather a medium of intention?)

For Husserl, then, phenomenology integrates a kind of psychology with a kind of logic. It develops a descriptive or analytic psychology in that it describes and analyses types of subjective mental activity or experience, in short, acts of consciousness. Yet it develops a kind of logic, a theory of meaning (today we say logical semantics), in that it describes and analyses objective contents of consciousness: ideas, concepts, images, propositions, in short, ideal meanings of various types that serve as intentional contents, or noematic meanings, of various types of experience. These contents are shareable by different acts of consciousness, and in that sense they are objective, ideal meanings. Following Bolzano (and to some extent the platonistic logician Hermann Lotze), Husserl opposed any reduction of logic or mathematics or science to mere psychology, to how people happen to think, and in the same spirit he distinguished phenomenology from mere psychology. For Husserl, phenomenology would study consciousness without reducing the objective and shareable meanings that inhabit experience to merely subjective happenstances. Ideal meaning would be the engine of intentionality in acts of consciousness.

A clear conception of phenomenology awaited Husserl's development of a clear model of intentionality. Indeed, phenomenology and the modern concept of intentionality emerged hand-in-hand in Husserl's Logical Investigations (1900-01). With theoretical foundations laid in the Investigations, Husserl would then promote the radical new science of phenomenology in Ideas I (1913). And alternative visions of phenomenology would soon follow.

The discipline of phenomenology came into its own with Husserl, much as epistemology came into its own with Descartes, and ontology or metaphysics came into its own with Aristotle on the heels of Plato. Yet phenomenology has been practised, with or without the name, for many centuries. When Hindu and Buddhist philosophers reflected on states of consciousness achieved in a variety of meditative states, they were practising phenomenology. When Descartes, Hume, and Kant characterized states of perception, thought, and imagination, they were practising phenomenology. When Brentano classified varieties of mental phenomena (defined by the directedness of consciousness), he was practising phenomenology. When William James appraised kinds of mental activity in the stream of consciousness (including their embodiment and their dependence on habit), he too was practising phenomenology. And when recent analytic philosophers of mind have addressed issues of consciousness and intentionality, they have often been practising phenomenology. Still, the discipline of phenomenology, its roots tracing back through the centuries, came to full flower in Husserl.

Husserl's work was followed by a flurry of phenomenological writing in the first half of the 20th century. The diversity of traditional phenomenology is apparent in the Encyclopedia of Phenomenology (Kluwer Academic Publishers, Dordrecht and Boston, 1997), which features separate articles on some seven types of phenomenology. (1) Transcendental constitutive phenomenology studies how objects are constituted in pure or transcendental consciousness, setting aside questions of any relation to the natural world around us. (2) Naturalistic constitutive phenomenology studies how consciousness constitutes or takes things in the world of nature, assuming with the natural attitude that consciousness is part of nature. (3) Existential phenomenology studies concrete human existence, including our experience of free choice or action in concrete situations. (4) Generative historicist phenomenology studies how meaning, as found in our experience, is generated in historical processes of collective experience over time. (5) Genetic phenomenology studies the genesis of meanings of things within one's own stream of experience. (6) Hermeneutical phenomenology studies interpretive structures of experience, how we understand and engage things around us in our human world, including ourselves and others. (7) Realistic phenomenology studies the structure of consciousness and intentionality, assuming it occurs in a real world that is largely external to consciousness and not somehow brought into being by consciousness.

The most famous of the classical phenomenologists were Husserl, Heidegger, Sartre, and Merleau~Ponty. In these four thinkers we find different conceptions of phenomenology, different methods, and different results. A brief sketch of their differences will capture both a crucial period in the history of phenomenology and a sense of the diversity of the field of phenomenology.

In his Logical Investigations (1900-01) Husserl outlined a complex system of philosophy, moving from logic to philosophy of language, to ontology (theory of universals and parts of wholes), to a phenomenological theory of intentionality, and finally to a phenomenological theory of knowledge. Then in Ideas I (1913) he focussed squarely on phenomenology itself. Husserl defined phenomenology as ‘the science of the essence of consciousness,’ centered on the defining trait of intentionality, approached explicitly ‘in the first person.’ In this spirit, we may say phenomenology is the study of consciousness, that is, conscious experience of various types, as experienced from the first-person point of view. In this discipline we study different forms of experience just as we experience them, from the perspective of the subject living through or performing them. Thus, we characterize experiences of seeing, hearing, imagining, thinking, feeling (i.e., emotion), wishing, desiring, willing, and acting, that is, embodied volitional activities of walking, talking, cooking, carpentering, etc. However, not just any characterization of an experience will do. Phenomenological analysis of a given type of experience will feature the ways in which we ourselves would experience that form of conscious activity. And the leading property of our familiar types of experience is their intentionality, their being a consciousness of or about something, something experienced or presented or engaged in a certain way. How I see or conceptualize or understand the object I am dealing with defines the meaning of that object in my current experience. Thus, phenomenology features a study of meaning, in a wide sense that includes more than what is expressed in language.

In Ideas I Husserl presented phenomenology with a transcendental turn. In part this means that Husserl took on the Kantian idiom of ‘transcendental idealism,’ looking for conditions of the possibility of knowledge, or of consciousness generally, and arguably turning away from any reality beyond phenomena. But Husserl's transcendental turn also involved his discovery of the method of epoché (from the Greek skeptics' notion of abstaining from belief). We are to practice phenomenology, Husserl proposed, by ‘bracketing’ the question of the existence of the natural world around us. We thereby turn our attention, in reflection, to the structure of our own conscious experience. Our first key result is the observation that each act of consciousness is a consciousness of something, that is, intentional, or directed toward something. Consider my visual experience wherein I see a tree across the square. In phenomenological reflection, we need not concern ourselves with whether the tree exists: my experience is of a tree whether or not such a tree exists. However, we do need to concern ourselves with how the object is meant or intended. I see a Eucalyptus tree, not a Yucca tree; I see that object as a Eucalyptus, with a certain shape and with bark stripping off, etc. Thus, bracketing the tree itself, we turn our attention to my experience of the tree, and specifically to the content or meaning in my experience. This tree-as-perceived Husserl calls the noema or noematic sense of the experience.

Philosophers succeeding Husserl debated the proper characterization of phenomenology, arguing over its results and its methods. Adolf Reinach, an early student of Husserl's (who died in World War I), argued that phenomenology should remain allied with a realist ontology, as in Husserl's Logical Investigations. Roman Ingarden, a Polish phenomenologist of the next generation, continued the resistance to Husserl's turn to transcendental idealism. For such philosophers, phenomenology should not bracket questions of being or ontology, as the method of epoché would suggest. And they were not alone. Martin Heidegger studied Husserl's early writings, worked as Assistant to Husserl in 1916, and in 1928 succeeded Husserl in the prestigious chair at the University of Freiburg. Heidegger had his own ideas about phenomenology.

In Being and Time (1927) Heidegger unfurled his rendition of phenomenology. For Heidegger, we and our activities are always ‘in the world,’ our being is being-in-the-world, so we do not study our activities by bracketing the world, rather we interpret our activities and the meaning things have for us by looking to our contextual relations to things in the world. Indeed, for Heidegger, phenomenology resolves into what he called ‘fundamental ontology.’ We must distinguish beings from their being, and we begin our investigation of the meaning of being in our own case, examining our own existence in the activity of ‘Dasein’ (that being whose being is in each case my own). Heidegger resisted Husserl's neo-Cartesian emphasis on consciousness and subjectivity, including how perception presents things around us. By contrast, Heidegger held that our more basic ways of relating to things are in practical activities like hammering, where the phenomenology reveals our situation in a context of equipment and in being-with-others.

In Being and Time Heidegger approached phenomenology, in a quasi-poetic idiom, through the root meanings of ‘logos’ and ‘phenomena,’ so that phenomenology is defined as the art or practice of ‘letting things show themselves.’ In Heidegger's inimitable linguistic play on the Greek roots, ‘phenomenology’ means . . . to let that which shows itself be seen from itself in the very way in which it shows itself from itself. Here Heidegger explicitly parodies Husserl's call, ‘To the things themselves,’ or ‘To the phenomena themselves!’ Heidegger went on to emphasize practical forms of comportment or relating (Verhalten), as in hammering a nail, as opposed to representational forms of intentionality, as in seeing or thinking about a hammer. Much of Being and Time develops an existential interpretation of our modes of being including, famously, our being-toward-death.

In a very different style, in clear analytical prose, in the text of a lecture course called The Basic Problems of Phenomenology (1927), Heidegger traced the question of the meaning of being from Aristotle through many other thinkers into the issues of phenomenology. Our understanding of beings and their being comes ultimately through phenomenology. Here the connection with classical issues of ontology is more apparent, and consonant with Husserl's vision in the Logical Investigations (an early source of inspiration for Heidegger). One of Heidegger's most innovative ideas was his conception of the ‘ground’ of being, looking to modes of being more fundamental than the things around us (from trees to hammers). Heidegger questioned the contemporary concern with technology, and his writing might suggest that our scientific theories are historical artifacts that we use in technological practice, rather than systems of ideal truth (as Husserl had held). Our deep understanding of being, in our own case, comes rather from phenomenology, Heidegger held.

In the 1930s phenomenology migrated from Austrian and then German philosophy into French philosophy. The way had been paved in Marcel Proust's In Search of Lost Time, in which the narrator recounts in close detail his vivid recollections of experiences, including his famous associations with the smell of freshly baked madeleines. This sensibility to experience traces to Descartes' work, and French phenomenology has been an effort to preserve the central thrust of Descartes' insights while rejecting mind-body dualism. The experience of one's own body, or one's lived or living body, has been an important motif in many French philosophers of the 20th century.

In the novel Nausea (1936) Jean-Paul Sartre described a bizarre course of experience in which the protagonist, writing in the first person, describes how ordinary objects lose their meaning until he encounters pure being at the foot of a chestnut tree, and in that moment recovers his sense of his own freedom. In Being and Nothingness (1943, written partly while a prisoner of war), Sartre developed his conception of phenomenological ontology. Consciousness is a consciousness of objects, as Husserl had stressed. In Sartre's model of intentionality, the central player in consciousness is a phenomenon, and the occurrence of a phenomenon is just a consciousness-of-an-object. The chestnut tree I see is, for Sartre, such a phenomenon in my consciousness. Indeed, all things in the world, as we normally experience them, are phenomena, beneath or behind which lies their ‘being-in-itself.’ Consciousness, by contrast, has ‘being-for-itself,’ inasmuch as consciousness is not only a consciousness-of-its-object but also a pre-reflective consciousness-of-itself (conscience de soi). Yet for Sartre, unlike Husserl, that ‘I’ or self is nothing but a sequence of acts of consciousness, notably including radically free choices (like a Humean bundle of perceptions).

For Sartre, the practice of phenomenology proceeds by a deliberate reflection on the structure of consciousness. Sartre's method is in effect a literary style of interpretive description of different types of experience in relevant situations, a practice that does not really fit the methodological proposals of either Husserl or Heidegger, but makes use of Sartre's great literary skill. (Sartre wrote many plays and novels and was awarded the Nobel Prize in Literature.)

Sartre's phenomenology in Being and Nothingness became the philosophical foundation for his popular philosophy of existentialism, sketched in his famous lecture ‘Existentialism is a Humanism’ (1945). In Being and Nothingness Sartre emphasized the experience of freedom of choice, especially the project of choosing oneself, the defining pattern of one's past actions. Through vivid description of the ‘look’ of the Other, Sartre laid groundwork for the contemporary political significance of the concept of the Other (as in other groups or ethnicities). Indeed, in The Second Sex (1949) Simone de Beauvoir, Sartre's lifelong companion, launched contemporary feminism with her nuanced account of the perceived role of women as Other.

In 1940s Paris, Maurice Merleau-Ponty joined with Sartre and Beauvoir in developing phenomenology. In Phenomenology of Perception (1945) Merleau-Ponty developed a rich variety of phenomenology emphasizing the role of the body in human experience. Unlike Husserl, Heidegger, and Sartre, Merleau-Ponty looked to experimental psychology, analysing the reported experience of amputees who felt sensations in a phantom limb. Merleau-Ponty rejected both associationist psychology, focussed on correlations between sensation and stimulus, and intellectualist psychology, focussed on rational construction of the world in the mind. (Think of the behaviorist and computationalist models of mind in more recent decades of empirical psychology.) Instead, Merleau-Ponty focussed on the ‘body image,’ our experience of our own body and its significance in our activities. Extending Husserl's account of the lived body (as opposed to the physical body), Merleau-Ponty resisted the traditional Cartesian separation of mind and body. For the body image is neither in the mental realm nor in the mechanical-physical realm. Rather, my body is, as it were, me in my engaged action with things I perceive including other people.

The scope of Phenomenology of Perception is characteristic of the breadth of classical phenomenology, not least because Merleau-Ponty drew (with generosity) on Husserl, Heidegger, and Sartre while fashioning his own innovative vision of phenomenology. His phenomenology addressed the role of attention in the phenomenal field, the experience of the body, the spatiality of the body, the motility of the body, the body in sexual being and in speech, other selves, temporality, and the character of freedom so important in French existentialism. Near the end of a chapter on the Cogito (Descartes' ‘I think, therefore I am’), Merleau-Ponty succinctly captures his embodied, existential form of phenomenology, writing: Insofar as, when I reflect on the essence of subjectivity, I find it bound up with that of the body and that of the world, this is because my existence as subjectivity [= consciousness] is merely one with my existence as a body and with the existence of the world, and because the subject that I am, when taken concretely, is inseparable from this body and this world. In short, consciousness is embodied (in the world), and equally body is infused with consciousness (with cognition of the world).

In the years since Husserl, Heidegger, et al. wrote, phenomenologists have dug into all these classical issues, including intentionality, temporal awareness, intersubjectivity, practical intentionality, and the social and linguistic contexts of human activity. Interpretation of historical texts by Husserl et al. has played a prominent role in this work, both because the texts are rich and difficult and because the historical dimension is itself part of the practice of continental European philosophy. Since the 1960s, philosophers trained in the methods of analytic philosophy have also dug into the foundations of phenomenology, with an eye to 20th century work in philosophy of logic, language, and mind.

Phenomenology was already linked with logical and semantic theory in Husserl's Logical Investigations. Analytic phenomenology picks up on that connection. In particular, Dagfinn Føllesdal and J. N. Mohanty have explored historical and conceptual relations between Husserl's phenomenology and Frege's logical semantics (in Frege's ‘On Sense and Reference,’ 1892). For Frege, an expression refers to an object by way of a sense: thus, two expressions (say, ‘the morning star’ and ‘the evening star’) may refer to the same object (Venus) but express different senses with different manners of presentation. For Husserl, similarly, an experience (or an act of consciousness) intends or refers to an object by way of a noema or noematic sense: thus, two experiences may refer to the same object but have different noematic senses involving different ways of presenting the object (for example, in seeing the same object from different sides). Indeed, for Husserl, the theory of intentionality is a generalization of the theory of linguistic reference: as linguistic reference is mediated by sense, so intentional reference is mediated by noematic sense.
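This parallel between Fregean sense and Husserlian noema can be put schematically. The following is an illustrative formalization only; neither Frege nor Husserl used this notation:

```latex
% Frege: two expressions with distinct senses but one referent.
\mathrm{sense}(\text{``the morning star''}) \neq \mathrm{sense}(\text{``the evening star''})
\quad\text{yet}\quad
\mathrm{ref}(\text{``the morning star''}) = \mathrm{ref}(\text{``the evening star''}) = \text{Venus}

% Husserl's generalization: two acts of consciousness a_1, a_2
% with distinct noematic senses presenting the same object.
\mathrm{noema}(a_1) \neq \mathrm{noema}(a_2)
\quad\text{yet}\quad
\mathrm{object}(a_1) = \mathrm{object}(a_2)
```

The structural point is the same in both rows: reference (linguistic or intentional) is never direct but always routed through a mediating sense, which carries the ‘manner of presentation’ of the object.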

More recently, analytic philosophers of mind have rediscovered phenomenological issues of mental representation, intentionality, consciousness, sensory experience, intentional content, and context of thought. Some of these analytic philosophers of mind hark back to William James and Franz Brentano at the origins of modern psychology, and some look to empirical research in today's cognitive neuroscience. Some researchers have begun to combine phenomenological issues with issues of neuroscience and behavioural studies and mathematical modelling. Such studies will extend the methods of traditional phenomenology as the Zeitgeist moves on. We address philosophy of mind below.

The discipline of phenomenology forms one basic field in philosophy among others. How is phenomenology distinguished from, and related to, other fields in philosophy?

Traditionally, philosophy includes at least four core fields or disciplines: ontology, epistemology, ethics, logic. Suppose phenomenology joins that list. Consider then these elementary definitions of field: (1) Ontology is the study of beings or their being: what is. (2) Epistemology is the study of knowledge: how we know. (3) Logic is the study of valid reasoning: how to reason. (4) Ethics is the study of right and wrong: how we should act. (5) Phenomenology is the study of our experience: how we experience. The domains of study in these five fields are clearly different, and they seem to call for different methods of study.

Philosophers have sometimes argued that one of these fields is ‘first philosophy,’ the most fundamental discipline, on which all philosophy or all knowledge or wisdom rests. Historically (it may be argued), Socrates and Plato put ethics first, then Aristotle put metaphysics or ontology first, then Descartes put epistemology first, then Russell put logic first, and then Husserl (in his later transcendental phase) put phenomenology first.

Consider epistemology. As we saw, phenomenology helps to define the phenomena on which knowledge claims rest, according to modern epistemology. On the other hand, phenomenology itself claims to achieve knowledge about the nature of consciousness, a distinctive kind of first-person knowledge, through a form of intuition. Consider logic. A logical theory of meaning led Husserl into the theory of intentionality, the heart of phenomenology. On one account, phenomenology explicates the intentional or semantic force of ideal meanings, and propositional meanings are central to logical theory. But logical structure is expressed in language, either ordinary language or symbolic languages like those of predicate logic or mathematics or computer systems. It remains an important issue of debate whether and how language shapes specific forms of experience (thought, perception, emotion) and their content or meaning. So there is an important (if disputed) relation between phenomenology and logico-linguistic theory, especially philosophical logic and philosophy of language (as opposed to mathematical logic per se).

Consider ontology. Phenomenology studies (among other things) the nature of consciousness, which is a central issue in metaphysics or ontology, and one that leads into the traditional mind-body problem. Husserlian methodology would bracket the question of the existence of the surrounding world, thereby separating phenomenology from the ontology of the world. Yet Husserl's phenomenology presupposes theory about species and individuals (universals and particulars), relations of part and whole, and ideal meanings, all parts of ontology.

Now consider ethics: Phenomenology might play a role in ethics by offering analyses of the structure of will, valuing, happiness, and care for others (in empathy and sympathy). Historically, though, ethics has been on the horizon of phenomenology. Husserl largely avoided ethics in his major works, though he featured the role of practical concerns in the structure of the life-world or of Geist (spirit, or culture, as in Zeitgeist). He once delivered a course of lectures giving ethics (like logic) a basic place in philosophy, indicating the importance of the phenomenology of sympathy in grounding ethics. In Being and Time Heidegger claimed not to pursue ethics while discussing phenomena ranging from care, conscience, and guilt to ‘fallenness’ and ‘authenticity’ (all phenomena with theological echoes). In Being and Nothingness Sartre analysed with subtlety the logical problem of ‘bad faith,’ yet he developed an ontology of value as produced by willing in good faith (which sounds like a revised Kantian foundation for morality). Beauvoir sketched an existentialist ethics, and Sartre left unpublished notebooks on ethics. However, an explicit phenomenological approach to ethics emerged in the works of Emmanuel Levinas, a Lithuanian phenomenologist who heard Husserl and Heidegger in Freiburg before moving to Paris. In Totality and Infinity (1961), modifying themes drawn from Husserl and Heidegger, Levinas focussed on the significance of the ‘face’ of the other, explicitly developing grounds for ethics in this range of phenomenology, writing in an impressionistic style of prose with allusions to religious experience.

Allied with ethics are political and social philosophies. Sartre and Merleau-Ponty were politically engaged in 1940s Paris, and their existential philosophies (phenomenologically based) suggest a political theory based in individual freedom. Sartre later sought an explicit blend of existentialism with Marxism. Still, political theory has remained on the borders of phenomenology. Social theory, however, has been closer to phenomenology as such. Husserl analysed the phenomenological structure of the life-world and Geist generally, including our role in social activity. Heidegger stressed social practice, which he found more primordial than individual consciousness. Alfred Schutz developed a phenomenology of the social world. Sartre continued the phenomenological appraisal of the meaning of the other, the fundamental social formation. Moving outward from phenomenological issues, Michel Foucault studied the genesis and meaning of social institutions, from prisons to insane asylums. And Jacques Derrida has long practised a kind of phenomenology of language, pursuing social meaning in the ‘deconstruction’ of wide-ranging texts. Aspects of French ‘poststructuralist’ theory are sometimes interpreted as broadly phenomenological, but such issues are beyond the present purview.

Classical phenomenology, then, ties into certain areas of epistemology, logic, and ontology, and leads into parts of ethical, social, and political theory.

It ought to be obvious that phenomenology has a lot to say in the area called philosophy of mind. Yet the traditions of phenomenology and analytic philosophy of mind have not been closely joined, despite overlapping areas of interest. So it is appropriate to close this survey of phenomenology by addressing philosophy of mind, one of the most vigorously debated areas in recent philosophy.

The tradition of analytic philosophy began, early in the 20th century, with analyses of language, notably in the works of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein. Then in The Concept of Mind (1949) Gilbert Ryle developed a series of analyses of language about different mental states, including sensation, belief, and will. Though Ryle is commonly deemed a philosopher of ordinary language, Ryle himself said The Concept of Mind could be called phenomenology. In effect, Ryle analysed our phenomenological understanding of mental states as reflected in ordinary language about the mind. From this linguistic phenomenology Ryle argued that Cartesian mind-body dualism involves a category mistake (the logic or grammar of mental verbs, ‘believe,’ ‘see,’ etc., does not mean that we ascribe belief, sensation, etc., to ‘the ghost in the machine’). With Ryle's rejection of mind-body dualism, the mind-body problem was re-awakened: What is the ontology of mind/body, and how are mind and body related?

René Descartes, in his epoch-making Meditations on First Philosophy (1641), had argued that minds and bodies are two distinct kinds of being or substance with two distinct kinds of attributes or modes: Bodies are characterized by spatiotemporal physical properties, while minds are characterized by properties of thinking (including seeing, feeling, etc.). Centuries later, phenomenology would find, with Brentano and Husserl, that mental acts are characterized by consciousness and intentionality, while natural science would find that physical systems are characterized by mass and force, ultimately by gravitational, electromagnetic, and quantum fields. Where do we find consciousness and intentionality in the quantum-electromagnetic-gravitational field that, by hypothesis, orders everything in the natural world in which we humans and our minds exist? That is the mind-body problem today. In short, phenomenology by any other name lies at the heart of the contemporary mind-body problem.

After Ryle, philosophers sought a more explicit and generally naturalistic ontology of mind. In the 1950s materialism was argued anew, urging that mental states are identical with states of the central nervous system. The classical identity theory holds that each token mental state (in a particular person's mind at a particular time) is identical with a token brain state (in that person's brain at that time). A weaker materialism holds instead that each type of mental state is identical with a type of brain state. But materialism does not fit comfortably with phenomenology. For it is not obvious how conscious mental states as we experience them (sensations, thoughts, emotions) can simply be the complex neural states that somehow subserve or implement them. If mental states and neural states are simply identical, in token or in type, where in our scientific theory of mind does the phenomenology occur: is it not simply replaced by neuroscience? And yet experience is part of what is to be explained by neuroscience.

In the late 1960s and 1970s the computer model of mind set in, and functionalism became the dominant model of mind. On this model, mind is not what the brain consists in (electrochemical transactions in neurons in vast complexes). Instead, mind is what brains do: their function of mediating between information coming into the organism and behaviour proceeding from the organism. Thus, a mental state is a functional state of the brain or of the human (or animal) organism. More specifically, on a favourite variation of functionalism, the mind is a computing system: mind is to brain as software is to hardware; thoughts are just programs running on the brain's ‘wetware.’ Since the 1970s the cognitive sciences, from experimental studies of cognition to neuroscience, have tended toward a mix of materialism and functionalism. Gradually, however, philosophers found that phenomenological aspects of the mind pose problems for the functionalist paradigm too.

In the early 1970s Thomas Nagel argued in ‘What Is It Like to Be a Bat?’ (1974) that consciousness itself, especially the subjective character of what it is like to have a certain type of experience, escapes physical theory. Many philosophers pressed the case that sensory qualia (what it is like to feel pain, to see red, etc.) are not addressed or explained by a physical account of either brain structure or brain function. Consciousness has properties of its own. And yet, we know, it is closely tied to the brain. And, at some level of description, neural activities implement computation.

In the 1980s John Searle argued in Intentionality (1983) (and further in The Rediscovery of the Mind (1991)) that intentionality and consciousness are essential properties of mental states. For Searle, our brains produce mental states with properties of consciousness and intentionality, and this is all part of our biology, yet consciousness and intentionality require a ‘first-person’ ontology. Searle also argued that computers simulate but do not have mental states characterized by intentionality. As Searle argued, a computer system has a syntax (processing symbols of certain shapes) but has no semantics (the symbols lack meaning: we interpret the symbols). In this way Searle rejected both materialism and functionalism, while insisting that mind is a biological property of organisms like us: our brains ‘secrete’ consciousness.

The analysis of consciousness and intentionality is central to phenomenology as appraised above, and Searle's theory of intentionality reads like a modernized version of Husserl's. (Contemporary logical theory takes the form of stating truth conditions for propositions, and Searle characterizes a mental state's intentionality by specifying its ‘satisfaction conditions.’) However, there is an important difference in background theory. For Searle explicitly assumes the basic worldview of natural science, holding that consciousness is part of nature. But Husserl explicitly brackets that assumption, and later phenomenologists, including Heidegger, Sartre, and Merleau-Ponty, seem to seek a certain sanctuary for phenomenology beyond the natural sciences. And yet phenomenology itself should be largely neutral about further theories of how experience arises, notably from brain activity.

The philosophy or theory of mind overall may be factored into the following disciplines or ranges of theory relevant to mind: Phenomenology studies conscious experience as experienced, analysing the structure (the types, intentional forms and meanings, dynamics, and certain enabling conditions) of perception, thought, imagination, emotion, and volition and action.

Neuroscience studies the neural activities that serve as biological substrate to the various types of mental activity, including conscious experience. Neuroscience will be framed by evolutionary biology (explaining how neural phenomena evolved) and ultimately by basic physics (explaining how biological phenomena are grounded in physical phenomena). Here lie the intricacies of the natural sciences. Part of what the sciences are accountable for is the structure of experience, analysed by phenomenology.

Cultural analysis studies the social practices that help to shape or serve as cultural substrate of the various types of mental activity, including conscious experience. Here we study the import of language and other social practices.

Ontology of mind studies the ontological type of mental activity in general, ranging from perception (which involves causal input from environment to experience) to volitional action (which involves causal output from volition to bodily movement).

This division of labour in the theory of mind can be seen as an extension of Brentano's original distinction between descriptive and genetic psychology. Phenomenology offers descriptive analyses of mental phenomena, while neuroscience (and wider biology and ultimately physics) offers models of explanation of what causes or gives rise to mental phenomena. Cultural theory offers analyses of social activities and their impact on experience, including ways language shapes our thought, emotion, and motivation. And ontology frames all these results within a basic scheme of the structure of the world, including our own minds.

Meanwhile, from an epistemological standpoint, all these ranges of theory about mind begin with how we observe and reason about and seek to explain phenomena we encounter in the world. And that is where phenomenology begins. Moreover, how we understand each piece of theory, including theory about mind, is central to the theory of intentionality, as it were, the semantics of thought and experience in general. And that is the heart of phenomenology.

The discipline of phenomenology may be defined as the study of structures of experience or consciousness. Literally, phenomenology is the study of ‘phenomena’: appearances of things, or things as they appear in our experience, or the ways we experience things, thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first-person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.

The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and Jean-Paul Sartre. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy, as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)

In recent philosophy of mind, the term ‘phenomenology’ is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: what it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably, the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our ‘life-world.’

Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in contemporary views, while also highlighting the historical tradition that brought the discipline into its own.

Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called ‘intentionality,’ that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experience is directed toward, represents or ‘intends,’ things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.

The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or ‘horizonal’ awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).

Furthermore, in a different dimension, we find various grounds or enabling conditions (conditions of the possibility) of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focussed on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focussed especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality are grounded in brain activity. It remains a difficult question how much of these grounds of experience falls within the province of phenomenology as a discipline. Cultural conditions seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.

Phenomenology studies structures of conscious experience as experienced from the first~person point of view, along with relevant conditions of experience. The central structure of an experience is its intentionality, the way it is directed through its content or meaning toward a certain object in the world.

We all experience various types of experience including perception, imagination, thought, emotion, desire, volition, and action. Thus, the domain of phenomenology is the range of experiences including these types (among others). Experience includes not only relatively passive experience as in vision or hearing, but also active experience as in walking or hammering a nail or kicking a ball. (The range will be specific to each species of being that enjoys consciousness; our focus is on our own, human, experience. Not all conscious beings will, or will be able to, practise phenomenology, as we do.)

Conscious experiences have a unique feature: we experience them, we live through them or perform them. Other things in the world we may observe and engage. But we do not experience them, in the sense of living through or performing them. This experiential or first-person feature, that of being experienced, is an essential part of the nature or structure of conscious experience: as we say, ‘I see / think / desire / do . . .’ This feature is both a phenomenological and an ontological feature of each experience: it is part of what it is for the experience to be experienced (phenomenological) and part of what it is for the experience to be (ontological).

How shall we study conscious experience? We reflect on various types of experiences just as we experience them. That is to say, we proceed from the first-person point of view. However, we do not normally characterize an experience at the time we are performing it. In many cases we do not have that capability: a state of intense anger or fear, for example, consumes the entire focus at the time. Rather, we acquire a background of having lived through a given type of experience, and we look to our familiarity with that type of experience: hearing a song, seeing the sun set, thinking about love, intending to jump a hurdle. The practice of phenomenology assumes such familiarity with the type of experiences to be characterized. Importantly, it is types of experience that phenomenology pursues, rather than a particular fleeting experience, unless its type is what interests us.

Classical phenomenologists practised some three distinguishable methods. (1) We describe a type of experience just as we find it in our own (past) experience. Thus, Husserl and Merleau~Ponty spoke of pure description of lived experience. (2) We interpret a type of experience by relating it to relevant features of context. In this vein, Heidegger and his followers spoke of hermeneutics, the art of interpretation in context, especially social and linguistic context. (3) We analyse the form of a type of experience. In the end, all the classical phenomenologists practised analysis of experience, factoring out notable features for further elaboration.

These traditional methods have been ramified in recent decades, expanding the methods available to phenomenology. Thus: (4) In a logico-semantic model of phenomenology, we specify the truth conditions for a type of thinking (say, where I think that dogs chase cats) or the satisfaction conditions for a type of intention (say, where I intend or will to jump that hurdle). (5) In the experimental paradigm of cognitive neuroscience, we design empirical experiments that tend to confirm or refute aspects of experience (say, where a brain scan shows electrochemical activity in a specific region of the brain thought to subserve a type of vision or emotion or motor control). This style of ‘neurophenomenology’ assumes that conscious experience is grounded in neural activity in embodied action in appropriate surroundings, mixing pure phenomenology with biological and physical science in a way that was not wholly congenial to traditional phenomenologists.

What makes an experience conscious is a certain awareness one has of the experience while living through or performing it. This form of inner awareness has been a topic of considerable debate, centuries after the issue arose with Locke's notion of self-consciousness on the heels of Descartes' sense of consciousness (conscience, co-knowledge). Does this awareness-of-experience consist in a kind of inner observation of the experience, as if one were doing two things at once? (Brentano argued no.) Is it a higher-order perception of one's mind's operation, or is it a higher-order thought about one's mental activity? (Recent theorists have proposed both.) Or is it a different form of inherent structure? (Sartre took this line, drawing on Brentano and Husserl.) These issues are beyond the scope of this article, but notice that these results of phenomenological analysis shape the characterization of the domain of study and the methodology appropriate to the domain. For awareness-of-experience is a defining trait of conscious experience, the trait that gives experience a first-person, lived character. It is that lived character of experience that allows a first-person perspective on the object of study, namely, experience, and that perspective is characteristic of the methodology of phenomenology.

Conscious experience is the starting point of phenomenology, but experience shades off into less overtly conscious phenomena. As Husserl and others stressed, we are only vaguely aware of things in the margin or periphery of attention, and we are only implicitly aware of the wider horizon of things in the world around us. Moreover, as Heidegger stressed, in practical activities like walking along, or hammering a nail, or speaking our native tongue, we are not explicitly conscious of our habitual patterns of action. Furthermore, as psychoanalysts have stressed, much of our intentional mental activity is not conscious at all, but may become conscious in the process of therapy or interrogation, as we come to realize how we feel or think about something. We should allow, then, that the domain of phenomenology, our own experience, spreads out from conscious experience into semiconscious and even unconscious mental activity, along with relevant background conditions implicitly invoked in our experience. (These issues are subject to debate; the point here is to open the door to the question of where to draw the boundary of the domain of phenomenology.)

To begin an elementary exercise in phenomenology, consider some typical experiences one might have in everyday life, characterized in the first person: (1) I see that fishing boat off the coast as dusk descends over the Pacific. (2) I hear that helicopter whirring overhead as it approaches the hospital. (3) I am thinking that phenomenology differs from psychology. (4) I wish that warm rain from Mexico were falling like last week. (5) I imagine a fearsome creature like that in my nightmare. (6) I intend to finish my writing by noon. (7) I walk carefully around the broken glass on the sidewalk. (8) I stroke a backhand cross-court with that certain underspin. (9) I am searching for the words to make my point in conversation.

Here are rudimentary characterizations of some familiar types of experience. Each sentence is a simple form of phenomenological description, articulating in everyday English the structure of the type of experience so described. The subject term ‘I’ indicates the first-person structure of the experience: the intentionality proceeds from the subject. The verb indicates the type of intentional activity described: perception, thought, imagination, etc. Of central importance is the way that objects of awareness are presented or intended in our experiences, especially, the way we see or conceive or think about objects. The direct-object expression (‘that fishing boat off the coast’) articulates the mode of presentation of the object in the experience: the content or meaning of the experience, the core of what Husserl called noema. In effect, the object-phrase expresses the noema of the act described, that is, to the extent that language has appropriate expressive power. The overall form of the given sentence thus articulates the basic form of intentionality in the experience: subject-act-content-object.

Fruitful phenomenological description or interpretation, as in Husserl or Merleau-Ponty, will far outrun such simple phenomenological descriptions as above. But such simple descriptions bring out the basic form of intentionality. As we interpret the phenomenological description further, we may assess the relevance of the context of experience. And we may turn to wider conditions of the possibility of that type of experience. In this way, in the practice of phenomenology, we classify, describe, interpret, and analyse structures of experience in ways that answer to our own experience.

In such interpretive-descriptive analyses of experience, we immediately observe that we are analysing familiar forms of consciousness, conscious experience of or about this or that. Intentionality is thus the salient structure of our experience, and much of phenomenology proceeds as the study of different aspects of intentionality. Thus, we explore structures of the stream of consciousness, the enduring self, the embodied self, and bodily action. Furthermore, as we reflect on how these phenomena work, we turn to the analysis of relevant conditions that enable our experiences to occur as they do, and to represent or intend as they do. Phenomenology then leads into analyses of conditions of the possibility of intentionality, conditions involving motor skills and habits, background social practices, and often language, with its special place in human affairs. The Oxford English Dictionary presents the following definition: ‘Phenomenology. (i) The science of phenomena as distinct from being (ontology). (ii) That division of any science that describes and classifies its phenomena. From the Greek phainomenon, appearance.’ In philosophy, the term is used in the first sense, amid debates of theory and methodology. In physics and philosophy of science, the term is used in the second sense, even if only occasionally.

In its root meaning, then, phenomenology is the study of phenomena: Literally, appearances as opposed to reality. This ancient distinction launched philosophy as we emerged from Plato's cave. Yet the discipline of phenomenology did not blossom until the 20th century and remains poorly understood in many circles of contemporary philosophy. What is that discipline? How did philosophy move from a root concept of phenomena to the discipline of phenomenology?

Originally, in the 18th century, ‘phenomenology’ meant the theory of appearances fundamental to empirical knowledge, especially sensory appearances. The term seems to have been introduced by Johann Heinrich Lambert, a follower of Christian Wolff. Subsequently, Immanuel Kant used the term occasionally in various writings, as did Johann Gottlieb Fichte and G. W. F. Hegel. By 1889 Franz Brentano used the term to characterize what he called ‘descriptive psychology.’ From there Edmund Husserl took up the term for his new science of consciousness, and the rest is history.

Suppose we say phenomenology studies phenomena: what appears to us, and its appearing. How shall we understand phenomena? The term has a rich history in recent centuries, in which we can see traces of the emerging discipline of phenomenology.

In a strict empiricist vein, what appears before the mind are sensory data or qualia: either patterns of one's own sensations (seeing red here now, feeling this ticklish feeling, hearing that resonant bass tone) or sensible patterns of worldly things, say, the looks and smells of flowers (what John Locke called secondary qualities of things). In a strict rationalist vein, by contrast, what appears before the mind are ideas, rationally formed ‘clear and distinct ideas’ (in René Descartes' ideal). In Immanuel Kant's theory of knowledge, fusing rationalist and empiricist aims, what appears to the mind are phenomena defined as things-as-they-appear or things-as-they-are-represented (in a synthesis of sensory and conceptual forms of objects-as-known). In Auguste Comte's theory of science, phenomena (phénomènes) are the facts (faits, what occurs) that a given science would explain.

In 18th and 19th century epistemology, then, phenomena are the starting points in building knowledge, especially science. Accordingly, in a familiar and still current sense, phenomena are whatever we observe (perceive) and seek to explain.

As the discipline of psychology emerged late in the 19th century, however, phenomena took on a somewhat different guise. In Franz Brentano's Psychology from an Empirical Standpoint (1874), phenomena are what occur in the mind: mental phenomena are acts of consciousness (or their contents), and physical phenomena are objects of external perception starting with colours and shapes. For Brentano, physical phenomena exist ‘intentionally’ in acts of consciousness. This view revives a Medieval notion Brentano called ‘intentional in-existence,’ but the ontology remains undeveloped (what is it to exist in the mind, and do physical objects exist only in the mind?). More generally, we might say, phenomena are whatever we are conscious of: objects and events around us, other people, ourselves, even (in reflection) our own conscious experiences, as we experience these. In a certain technical sense, phenomena are things as they are given to our consciousness, whether in perception or imagination or thought or volition. This conception of phenomena would soon inform the new discipline of phenomenology.

Brentano distinguished descriptive psychology from genetic psychology. Where genetic psychology seeks the causes of various types of mental phenomena, descriptive psychology defines and classifies the various types of mental phenomena, including perception, judgment, emotion, etc. According to Brentano, every mental phenomenon, or act of consciousness, is directed toward some object, and only mental phenomena are so directed. This thesis of intentional directedness was the hallmark of Brentano's descriptive psychology. In 1889 Brentano used the term ‘phenomenology’ for descriptive psychology, and the way was paved for Husserl's new science of phenomenology.

Phenomenology as we know it was launched by Edmund Husserl in his Logical Investigations (1900-01). Two importantly different lines of theory came together in that monumental work: psychological theory, on the heels of Franz Brentano (and William James, whose Principles of Psychology appeared in 1891 and greatly impressed Husserl); and logical or semantic theory, on the heels of Bernard Bolzano and Husserl's contemporaries who founded modern logic, including Gottlob Frege. (Interestingly, both lines of research trace back to Aristotle, and both reached importantly new results in Husserl's day.)

Husserl's Logical Investigations was inspired by Bolzano's ideal of logic, while taking up Brentano's conception of descriptive psychology. In his Theory of Science (1835) Bolzano distinguished between subjective and objective ideas or representations (Vorstellungen). In effect Bolzano criticized Kant and, before him, the classical empiricists and rationalists for failing to make this sort of distinction, thereby rendering phenomena merely subjective. Logic studies objective ideas, including propositions, which in turn make up objective theories as in the sciences. Psychology would, by contrast, study subjective ideas, the concrete contents (occurrences) of mental activities in particular minds at a given time. Husserl was after both, within a single discipline. So phenomena must be reconceived as objective intentional contents (sometimes called intentional objects) of subjective acts of consciousness. Phenomenology would then study this complex of consciousness and correlated phenomena. In Ideas I (Book One, 1913) Husserl introduced two Greek words to capture his version of the Bolzanoan distinction: noesis and noema (from the Greek verb noeō, meaning to perceive, think, intend, whence the noun nous or mind). The intentional process of consciousness is called noesis, while its ideal content is called noema. The noema of an act of consciousness Husserl characterized both as an ideal meaning and as ‘the object as intended.’ Thus the phenomenon, or object-as-it-appears, becomes the noema, or object-as-it-is-intended. The interpretations of Husserl's theory of noema have been several and amount to different developments of Husserl's basic theory of intentionality. (Is the noema an aspect of the object intended, or rather a medium of intention?)

For Husserl, then, phenomenology integrates a kind of psychology with a kind of logic. It develops a descriptive or analytic psychology in that it describes and analyses types of subjective mental activity or experience, in short, acts of consciousness. Yet it develops a kind of logic, a theory of meaning (today we say logical semantics), in that it describes and analyses objective contents of consciousness: ideas, concepts, images, propositions, in short, ideal meanings of various types that serve as intentional contents, or noematic meanings, of various types of experience. These contents are shareable by different acts of consciousness, and in that sense they are objective, ideal meanings. Following Bolzano (and to some extent the platonistic logician Hermann Lotze), Husserl opposed any reduction of logic or mathematics or science to mere psychology, to how human beings happen to think, and in the same spirit he distinguished phenomenology from mere psychology. For Husserl, phenomenology would study consciousness without reducing the objective and shareable meanings that inhabit experience to merely subjective happenstances. Ideal meaning would be the engine of intentionality in acts of consciousness.

A clear conception of phenomenology awaited Husserl's development of a clear model of intentionality. Indeed, phenomenology and the modern concept of intentionality emerged hand-in-hand in Husserl's Logical Investigations (1900-01). With theoretical foundations laid in the Investigations, Husserl would then promote the radical new science of phenomenology in Ideas. And alternative visions of phenomenology would soon follow.

Phenomenology came into its own with Husserl, much as epistemology came into its own with Descartes, and ontology or metaphysics came into its own with Aristotle on the heels of Plato. Yet phenomenology has been practised, with or without the name, for many centuries. When Hindu and Buddhist philosophers reflected on states of consciousness achieved in a variety of meditative states, they were practising phenomenology. When Descartes, Hume, and Kant characterized states of perception, thought, and imagination, they were practising phenomenology. When Brentano classified varieties of mental phenomena (defined by the directedness of consciousness), he was practising phenomenology. When William James appraised kinds of mental activity in the stream of consciousness (including their embodiment and their dependence on habit), he too was practising phenomenology. And when recent analytic philosophers of mind have addressed issues of consciousness and intentionality, they have often been practising phenomenology. Still, the discipline of phenomenology, its roots tracing back through the centuries, came to full flower in Husserl.

Husserl's work was followed by a flurry of phenomenological writing in the first half of the 20th century. The diversity of traditional phenomenology is apparent in the Encyclopaedia of Phenomenology (Kluwer Academic Publishers, 1997, Dordrecht and Boston), which features separate articles on some seven types of phenomenology. (1) Transcendental constitutive phenomenology studies how objects are constituted in pure or transcendental consciousness, setting aside questions of any relation to the natural world around us. (2) Naturalistic constitutive phenomenology studies how consciousness constitutes or takes things in the world of nature, assuming with the natural attitude that consciousness is part of nature. (3) Existential phenomenology studies concrete human existence, including our experience of free choice or action in concrete situations. (4) Generative historicist phenomenology studies how meaning, as found in our experience, is generated in historical processes of collective experience over time. (5) Genetic phenomenology studies the genesis of meanings of things within one's own stream of experience. (6) Hermeneutical phenomenology studies interpretive structures of experience, how we understand and engage things around us in our human world, including ourselves and others. (7) Realistic phenomenology studies the structure of consciousness and intentionality, assuming it occurs in a real world that is largely external to consciousness and not somehow brought into being by consciousness.

The most famous of the classical phenomenologists were Husserl, Heidegger, Sartre, and Merleau~Ponty. In these four thinkers we find different conceptions of phenomenology, different methods, and different results. A brief sketch of their differences will capture both a crucial period in the history of phenomenology and a sense of the diversity of the field of phenomenology.

In his Logical Investigations (1900-01) Husserl outlined a complex system of philosophy, moving from logic to philosophy of language, to ontology (theory of universals and parts of wholes), to a phenomenological theory of intentionality, and finally to a phenomenological theory of knowledge. Then in Ideas I (1913) he focussed squarely on phenomenology itself. Husserl defined phenomenology as ‘the science of the essence of consciousness,’ centered on the defining trait of intentionality, approached explicitly ‘in the first person.’ In this spirit, we may say phenomenology is the study of consciousness, that is, conscious experience of various types, as experienced from the first-person point of view. In this discipline we study different forms of experience just as we experience them, from the perspective of the subject living through or performing them. Thus, we characterize experiences of seeing, hearing, imagining, thinking, feeling (i.e., emotion), wishing, desiring, willing, and acting, that is, embodied volitional activities of walking, talking, cooking, carpentering, etc. However, not just any characterization of an experience will do. Phenomenological analysis of a given type of experience will feature the ways in which we ourselves would experience that form of conscious activity. And the leading property of our familiar types of experience is their intentionality, their being a consciousness of or about something, something experienced or presented or engaged in a certain way. How I see or conceptualize or understand the object I am dealing with defines the meaning of that object in my current experience. Thus, phenomenology features a study of meaning, in a wide sense that includes more than what is expressed in language.

In Ideas, Husserl presented phenomenology with a transcendental turn. In part this means that Husserl took on the Kantian idiom of ‘transcendental idealism,’ looking for conditions of the possibility of knowledge, or of consciousness generally, and arguably turning away from any reality beyond phenomena. But Husserl's transcendental turn also involved his discovery of the method of epoché (from the Greek skeptics' notion of abstaining from belief). We are to practise phenomenology, Husserl proposed, by ‘bracketing’ the question of the existence of the natural world around us. We thereby turn our attention, in reflection, to the structure of our own conscious experience. Our first key result is the observation that each act of consciousness is a consciousness of something, that is, intentional, or directed toward something. Consider my visual experience wherein I see a tree across the square. In phenomenological reflection, we need not concern ourselves with whether the tree exists: my experience is of a tree whether or not such a tree exists. However, we do need to concern ourselves with how the object is meant or intended. I see a Eucalyptus tree, not a Yucca tree; I see the object as a Eucalyptus tree, with a certain shape, with bark stripping off, etc. Thus, bracketing the tree itself, we turn our attention to my experience of the tree, and specifically to the content or meaning in my experience. This tree-as-perceived Husserl calls the noema or noematic sense of the experience.

Philosophers succeeding Husserl debated the proper characterization of phenomenology, arguing over its results and its methods. Adolf Reinach, an early student of Husserl's (who died in World War I), argued that phenomenology should remain allied with a realist ontology, as in Husserl's Logical Investigations. Roman Ingarden, a Polish phenomenologist of the next generation, continued the resistance to Husserl's turn to transcendental idealism. For such philosophers, phenomenology should not bracket questions of being or ontology, as the method of epoché would suggest. And they were not alone. Martin Heidegger studied Husserl's early writings, worked as Assistant to Husserl in 1916, and in 1928 succeeded Husserl in the prestigious chair at the University of Freiburg. Heidegger had his own ideas about phenomenology.

In Being and Time (1927) Heidegger unfurled his rendition of phenomenology. For Heidegger, we and our activities are always ‘in the world,’ our being is being-in-the-world, so we do not study our activities by bracketing the world; rather we interpret our activities and the meaning things have for us by looking to our contextual relations to things in the world. Indeed, for Heidegger, phenomenology resolves into what he called ‘fundamental ontology.’ We must distinguish beings from their being, and we begin our investigation of the meaning of being in our own case, examining our own existence in the activity of ‘Dasein’ (that being whose being is in each case my own). Heidegger resisted Husserl's neo-Cartesian emphasis on consciousness and subjectivity, including how perception presents things around us. By contrast, Heidegger held that our more basic ways of relating to things are in practical activities like hammering, where the phenomenology reveals our situation in a context of equipment and in being-with-others.

In Being and Time Heidegger approached phenomenology, in a quasi-poetic idiom, through the root meanings of ‘logos’ and ‘phenomena,’ so that phenomenology is defined as the art or practice of ‘letting things show themselves.’ In Heidegger's inimitable linguistic play on the Greek roots, ‘phenomenology’ means . . . to let that which shows itself be seen from itself in the very way in which it shows itself from itself. Here Heidegger explicitly parodies Husserl's call, ‘To the things themselves!’, or ‘To the phenomena themselves!’ Heidegger went on to emphasize practical forms of comportment or relating (Verhalten), as in hammering a nail, as opposed to representational forms of intentionality, as in seeing or thinking about a hammer. Being and Time developed an existential interpretation of our modes of being including, famously, our being-toward-death.

In a very different style, in clear analytical prose, in the text of a lecture course called The Basic Problems of Phenomenology (1927), Heidegger traced the question of the meaning of being from Aristotle through many other thinkers into the issues of phenomenology. Our understanding of beings and their being comes ultimately through phenomenology. Here the connection with classical issues of ontology is more apparent, and consonant with Husserl's vision in the Logical Investigations (an early source of inspiration for Heidegger). One of Heidegger's most innovative ideas was his conception of the ‘ground’ of being, looking to modes of being more fundamental than the things around us (from trees to hammers). Heidegger questioned the contemporary concern with technology, and his writing might suggest that our scientific theories are historical artifacts that we use in technological practice, rather than systems of ideal truth (as Husserl had held). Our deep understanding of being, in our own case, comes rather from phenomenology, Heidegger held.

In the 1930s phenomenology migrated from Austrian and then German philosophy into French philosophy. The way had been paved in Marcel Proust's In Search of Lost Time, in which the narrator recounts in close detail his vivid recollections of experiences, including his famous associations with the smell of freshly baked madeleines. This sensibility to experience traces to Descartes' work, and French phenomenology has been an effort to preserve the central thrust of Descartes' insights while rejecting mind~body dualism. The experience of one's own body, or one's lived or living body, has been an important motif in many French philosophers of the 20th century.

In the novel Nausea (1936) Jean-Paul Sartre described a bizarre course of experience in which the protagonist, writing in the first person, describes how ordinary objects lose their meaning until he encounters pure being at the foot of a chestnut tree, and in that moment recovers his sense of his own freedom. In Being and Nothingness (1943, written partly while a prisoner of war), Sartre developed his conception of phenomenological ontology. Consciousness is a consciousness of objects, as Husserl had stressed. In Sartre's model of intentionality, the central player in consciousness is a phenomenon, and the occurrence of a phenomenon is just a consciousness-of-an-object. The chestnut tree I see is, for Sartre, such a phenomenon in my consciousness. Indeed, all things in the world, as we normally experience them, are phenomena, beneath or behind which lies their ‘being-in-itself.’ Consciousness, by contrast, has ‘being-for-itself,’ since everything conscious is not only a consciousness-of-its-object but also a pre-reflective consciousness-of-itself (conscience). Yet for Sartre, unlike Husserl, the ‘I’ or self is nothing but a sequence of acts of consciousness, notably including radically free choices (like a Humean bundle of perceptions).

For Sartre, the practice of phenomenology proceeds by a deliberate reflection on the structure of consciousness. Sartre's method is in effect a literary style of interpretive description of different types of experience in relevant situations, a practice that does not really fit the methodological proposals of either Husserl or Heidegger, but makes good use of Sartre's great literary skill. (Sartre wrote many plays and novels and was awarded the Nobel Prize in Literature.)

Sartre's phenomenology in Being and Nothingness became the philosophical foundation for his popular philosophy of existentialism, sketched in his famous lecture ‘Existentialism is a Humanism’ (1945). In Being and Nothingness Sartre emphasized the experience of freedom of choice, especially the project of choosing oneself, the defining pattern of one's past actions. Through vivid description of the ‘look’ of the Other, Sartre laid groundwork for the contemporary political significance of the concept of the Other (as in other groups or ethnicities). Indeed, in The Second Sex (1949) Simone de Beauvoir, Sartre's lifelong companion, launched contemporary feminism with her nuanced account of the perceived role of women as Other.

In 1940s Paris, Maurice Merleau-Ponty joined with Sartre and Beauvoir in developing phenomenology. In Phenomenology of Perception (1945) Merleau-Ponty developed a rich variety of phenomenology emphasizing the role of the body in human experience. Unlike Husserl, Heidegger, and Sartre, Merleau-Ponty looked to experimental psychology, analysing the reported experience of amputees who felt sensations in a phantom limb. Merleau-Ponty rejected both associationist psychology, focussed on correlations between sensation and stimulus, and intellectualist psychology, focussed on rational construction of the world in the mind. (Think of the behaviorist and computationalist models of mind in more recent decades of empirical psychology.) Instead, Merleau-Ponty focussed on the ‘body image,’ our experience of our own body and its significance in our activities. Extending Husserl's account of the lived body (as opposed to the physical body), Merleau-Ponty resisted the traditional Cartesian separation of mind and body. For the body image is neither in the mental realm nor in the mechanical-physical realm. Rather, my body is, as it were, me in my engaged action with things I perceive, including other people.

The scope of Phenomenology of Perception is characteristic of the breadth of classical phenomenology, not least because Merleau-Ponty drew (with generosity) on Husserl, Heidegger, and Sartre while fashioning his own innovative vision of phenomenology. His phenomenology addressed the role of attention in the phenomenal field, the experience of the body, the spatiality of the body, the motility of the body, the body in sexual being and in speech, other selves, temporality, and the character of freedom so important in French existentialism. Near the end of a chapter on the Cogito (Descartes' ‘I think, therefore I am’), Merleau-Ponty succinctly captures his embodied, existential form of phenomenology, writing: Insofar as, when I reflect on the essence of subjectivity, I find it bound up with that of the body and that of the world, this is because my existence as subjectivity [= consciousness] is merely one with my existence as a body and with the existence of the world, and because the subject that I am, when taken concretely, is inseparable from this body and this world.

In short, consciousness is embodied (in the world), and equally body is infused with consciousness (with cognition of the world).

In the years since Husserl, Heidegger, et al. wrote, phenomenologists have dug into all these classical issues, including intentionality, temporal awareness, intersubjectivity, practical intentionality, and the social and linguistic contexts of human activity. Interpretation of historical texts by Husserl et al. has played a prominent role in this work, both because the texts are rich and difficult and because the historical dimension is itself part of the practice of continental European philosophy. Since the 1960s, philosophers trained in the methods of analytic philosophy have also dug into the foundations of phenomenology, with an eye to 20th century work in philosophy of logic, language, and mind.

Phenomenology was already linked with logical and semantic theory in Husserl's Logical Investigations. Analytic phenomenology picks up on that connection. In particular, Dagfinn Føllesdal and J. N. Mohanty have explored historical and conceptual relations between Husserl's phenomenology and Frege's logical semantics (in Frege's 'On Sense and Reference,' 1892). For Frege, an expression refers to an object by way of a sense: thus, two expressions (say, 'the morning star' and 'the evening star') may refer to the same object (Venus) but express different senses with different manners of presentation. For Husserl, similarly, an experience (or act of consciousness) intends or refers to an object by way of a noema or noematic sense: thus, two experiences may refer to the same object but have different noematic senses involving different ways of presenting the object (for example, in seeing the same object from different sides). Indeed, for Husserl, the theory of intentionality is a generalization of the theory of linguistic reference: as linguistic reference is mediated by sense, so intentional reference is mediated by noematic sense.

More recently, analytic philosophers of mind have rediscovered phenomenological issues of mental representation, intentionality, consciousness, sensory experience, intentional content, and the context of thought. Some of these analytic philosophers of mind hark back to William James and Franz Brentano at the origins of modern psychology, and some look to empirical research in today's cognitive neuroscience. Some researchers have begun to combine phenomenological issues with issues of neuroscience, behavioural studies, and mathematical modelling. Such studies will extend the methods of traditional phenomenology as the Zeitgeist moves on.

The discipline of phenomenology forms one basic field in philosophy among others. How is phenomenology distinguished from, and related to, other fields in philosophy?

Traditionally, philosophy includes at least four core fields or disciplines: ontology, epistemology, ethics, logic. Suppose phenomenology joins that list. Consider then these elementary definitions of field: (1) Ontology is the study of beings or their being: what is. (2) Epistemology is the study of knowledge: how we know. (3) Logic is the study of valid reasoning: how to reason. (4) Ethics is the study of right and wrong: how we should act. (5) Phenomenology is the study of our experience: how we experience.

The domains of study in these five fields are clearly different, and they seem to call for different methods of study.

Philosophers have sometimes argued that one of these fields is ‘first philosophy,’ the most fundamental discipline, on which all philosophy or all knowledge or wisdom rests. Historically (it may be argued), Socrates and Plato put ethics first, then Aristotle put metaphysics or ontology first, then Descartes put epistemology first, then Russell put logic first, and then Husserl (in his later transcendental phase) put phenomenology first.

Consider epistemology. As we saw, phenomenology helps to define the phenomena on which knowledge claims rest, according to modern epistemology. On the other hand, phenomenology itself claims to achieve knowledge about the nature of consciousness, a distinctive kind of first-person knowledge, through a form of intuition.

Consider logic. As we saw, a logical theory of meaning led Husserl into the theory of intentionality, the heart of phenomenology. On one account, phenomenology explicates the intentional or semantic force of ideal meanings, and propositional meanings are central to logical theory. But logical structure is expressed in language, either ordinary language or symbolic languages like those of predicate logic or mathematics or computer systems. It remains an important issue of debate where and whether language shapes specific forms of experience (thought, perception, emotion) and their content or meaning. So there is an important (if disputed) relation between phenomenology and logico-linguistic theory, especially philosophical logic and philosophy of language (as opposed to mathematical logic per se).

Consider ontology. Phenomenology studies (among other things) the nature of consciousness, which is a central issue in metaphysics or ontology, and one that leads into the traditional mind-body problem. Husserlian methodology would bracket the question of the existence of the surrounding world, thereby separating phenomenology from the ontology of the world. Yet Husserl's phenomenology presupposes theory about species and individuals (universals and particulars), relations of part and whole, and ideal meanings, all parts of ontology.

Now consider ethics. Phenomenology might play a role in ethics by offering analyses of the structure of will, valuing, happiness, and care for others (in empathy and sympathy). Historically, though, ethics has been on the horizon of phenomenology. Husserl largely avoided ethics in his major works, though he featured the role of practical concerns in the structure of the life-world or of Geist (spirit, or culture, as in Zeitgeist). He once delivered a course of lectures giving ethics (like logic) a basic place in philosophy, indicating the importance of the phenomenology of sympathy in grounding ethics. In Being and Time Heidegger claimed not to pursue ethics while discussing phenomena ranging from care, conscience, and guilt to 'fallenness' and 'authenticity' (all phenomena with theological echoes). In Being and Nothingness Sartre analysed with subtlety the logical problem of 'bad faith,' yet he developed an ontology of value as produced by willing in good faith (which sounds like a revised Kantian foundation for morality). Beauvoir sketched an existentialist ethics, and Sartre left unpublished notebooks on ethics. However, an explicitly phenomenological approach to ethics emerged in the works of Emmanuel Levinas, a Lithuanian phenomenologist who heard Husserl and Heidegger in Freiburg before moving to Paris. In Totality and Infinity (1961), modifying themes drawn from Husserl and Heidegger, Levinas focussed on the significance of the 'face' of the other, explicitly developing grounds for ethics in this range of phenomenology, writing in an impressionistic style of prose with allusions to religious experience.

Allied with ethics are political and social philosophy. Sartre and Merleau-Ponty were politically engaged in 1940s Paris, and their existential philosophies (phenomenologically based) suggest a political theory based in individual freedom. Sartre later sought an explicit blend of existentialism with Marxism. Still, political theory has remained on the borders of phenomenology. Social theory, however, has been closer to phenomenology as such. Husserl analysed the phenomenological structure of the life-world and Geist generally, including our role in social activity. Heidegger stressed social practice, which he found more primordial than individual consciousness. Alfred Schutz developed a phenomenology of the social world. Sartre continued the phenomenological appraisal of the meaning of the other, the fundamental social formation. Moving outward from phenomenological issues, Michel Foucault studied the genesis and meaning of social institutions, from prisons to insane asylums. And Jacques Derrida has long practised a kind of phenomenology of language, seeking social meaning in the 'deconstruction' of wide-ranging texts. Aspects of French 'poststructuralist' theory are sometimes interpreted as broadly phenomenological, but such issues are beyond the present purview.

Classical phenomenology, then, ties into certain areas of epistemology, logic, and ontology, and leads into parts of ethical, social, and political theory.

It ought to be obvious that phenomenology has a lot to say in the area called philosophy of mind. Yet the traditions of phenomenology and analytic philosophy of mind have not been closely joined, despite overlapping areas of interest. So it is appropriate to close this survey of phenomenology by addressing philosophy of mind, one of the most vigorously debated areas in recent philosophy.

The tradition of analytic philosophy began, early in the 20th century, with analyses of language, notably in the works of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein. Then in The Concept of Mind (1949) Gilbert Ryle developed a series of analyses of language about different mental states, including sensation, belief, and will. Though Ryle is commonly deemed a philosopher of ordinary language, Ryle himself said The Concept of Mind could be called phenomenology. In effect, Ryle analysed our phenomenological understanding of mental states as reflected in ordinary language about the mind. From this linguistic phenomenology Ryle argued that Cartesian mind-body dualism involves a category mistake (the logic or grammar of mental verbs such as 'believe' and 'see' does not mean that we ascribe belief, sensation, etc., to 'the ghost in the machine'). With Ryle's rejection of mind-body dualism, the mind-body problem was re-awakened: what is the ontology of mind/body, and how are mind and body related?

René Descartes, in his epoch-making Meditations on First Philosophy (1641), had argued that minds and bodies are two distinct kinds of being or substance with two distinct kinds of attributes or modes: bodies are characterized by spatiotemporal physical properties, while minds are characterized by properties of thinking (including seeing, feeling, etc.). Centuries later, phenomenology would find, with Brentano and Husserl, that mental acts are characterized by consciousness and intentionality, while natural science would find that physical systems are characterized by mass and force, ultimately by gravitational, electromagnetic, and quantum fields. Where do we find consciousness and intentionality in the quantum-electromagnetic-gravitational field that, by hypothesis, orders everything in the natural world in which we humans and our minds exist? That is the mind-body problem today. In short, phenomenology by any other name lies at the heart of the contemporary mind-body problem.

After Ryle, philosophers sought a more explicit and generally naturalistic ontology of mind. In the 1950s materialism was argued anew, urging that mental states are identical with states of the central nervous system. The classical identity theory holds that each token mental state (in a particular person's mind at a particular time) is identical with a token brain state (in that person's brain at that time). A weaker materialism holds, instead, that each type of mental state is identical with a type of brain state. But materialism does not fit comfortably with phenomenology. For it is not obvious how conscious mental states as we experience them (sensations, thoughts, emotions) can simply be the complex neural states that somehow subserve or implement them. If mental states and neural states are simply identical, in token or in type, where in our scientific theory of mind does the phenomenology occur? Is it not simply replaced by neuroscience? And yet experience is part of what is to be explained by neuroscience.

In the late 1960s and 1970s the computer model of mind set in, and functionalism became the dominant model of mind. On this model, mind is not what the brain consists in (electrochemical transactions in neurons in vast complexes). Instead, mind is what brains do: their function of mediating between information coming into the organism and behaviour proceeding from the organism. Thus, a mental state is a functional state of the brain or of the human or animal organism. More specifically, on a favourite variation of functionalism, the mind is a computing system: mind is to brain as software is to hardware; thoughts are just programs running on the brain's 'wetware.' Since the 1970s the cognitive sciences (from experimental studies of cognition to neuroscience) have tended toward a mix of materialism and functionalism. Gradually, however, philosophers found that phenomenological aspects of the mind pose problems for the functionalist paradigm too.

In the early 1970s Thomas Nagel argued in 'What Is It Like to Be a Bat?' (1974) that consciousness itself, especially the subjective character of what it is like to have a certain type of experience, escapes physical theory. Many philosophers pressed the case that sensory qualia (what it is like to feel pain, to see red, etc.) are not addressed or explained by a physical account of either brain structure or brain function. Consciousness has properties of its own. And yet, we know, it is closely tied to the brain. And, at some level of description, neural activities implement computation.

In the 1980s John Searle argued in Intentionality (1983) (and further in The Rediscovery of the Mind (1992)) that intentionality and consciousness are essential properties of mental states. For Searle, our brains produce mental states with properties of consciousness and intentionality, and this is all part of our biology, yet consciousness and intentionality require a 'first-person' ontology. Searle also argued that computers simulate but do not have mental states characterized by intentionality. As Searle argued, a computer system has a syntax (processing symbols of certain shapes) but no semantics (the symbols lack meaning: we interpret the symbols). In this way Searle rejected both materialism and functionalism, while insisting that mind is a biological property of organisms like us: our brains 'secrete' consciousness.

The analysis of consciousness and intentionality is central to phenomenology as appraised above, and Searle's theory of intentionality reads like a modernized version of Husserl's. (Contemporary logical theory takes the form of stating truth conditions for propositions, and Searle characterizes a mental state's intentionality by specifying its 'satisfaction conditions.') However, there is an important difference in background theory. For Searle explicitly assumes the basic worldview of natural science, holding that consciousness is part of nature. But Husserl explicitly brackets that assumption, and later phenomenologists (including Heidegger, Sartre, and Merleau-Ponty) seem to seek a certain sanctuary for phenomenology beyond the natural sciences. And yet phenomenology itself should be largely neutral about further theories of how experience arises, notably from brain activity.

The philosophy or theory of mind overall may be factored into the following disciplines or ranges of theory relevant to mind: Phenomenology studies conscious experience as experienced, analysing the structure (the types, intentional forms and meanings, dynamics, and certain enabling conditions) of perception, thought, imagination, emotion, and volition and action.

Neuroscience studies the neural activities that serve as biological substrate to the various types of mental activity, including conscious experience. Neuroscience will be framed by evolutionary biology (explaining how neural phenomena evolved) and ultimately by basic physics (explaining how biological phenomena are grounded in physical phenomena). Here lie the intricacies of the natural sciences. Part of what the sciences are accountable for is the structure of experience, analysed by phenomenology.

Cultural analysis studies the social practices that help to shape or serve as cultural substrate of the various types of mental activity, including conscious experience. Here we study the import of language and other social practices.

Ontology of mind studies the ontological type of mental activity in general, ranging from perception (which involves causal input from environment to experience) to volitional action (which involves causal output from volition to bodily movement).

This division of labour in the theory of mind can be seen as an extension of Brentano's original distinction between descriptive and genetic psychology. Phenomenology offers descriptive analyses of mental phenomena, while neuroscience (and wider biology and ultimately physics) offers models of explanation of what causes or gives rise to mental phenomena. Cultural theory offers analyses of social activities and their impact on experience, including ways language shapes our thought, emotion, and motivation. And ontology frames all these results within a basic scheme of the structure of the world, including our own minds.

Meanwhile, from an epistemological standpoint, all these ranges of theory about mind begin with how we observe and reason about and seek to explain phenomena we encounter in the world. And that is where phenomenology begins. Moreover, how we understand each piece of theory, including theory about mind, is central to the theory of intentionality, as it were, the semantics of thought and experience in general. And that is the heart of phenomenology.

There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. Each could contribute usefully to the other given a common frame of reference from which to further explore the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.

Here, the historical gap between neuroscience/cognitive science and psychotherapy is being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison in the two traditions of the concepts of the 'unconscious' and the 'conscious' and the relations between the two. It is suggested that these be understood as two independent 'story generators,' each with different styles of function and both operating optimally as reciprocal contributors to each other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.

For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend significant measures of their time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.

The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that 'human behaviour and all that it entails . . . is a function of the nervous system' is itself a story used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, my intent is to explore the implications and significance of the fact that there ARE different stories and that they might be about the same (some)thing.

In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. That new story is itself a story of conflicting stories within . . . what is called the 'nervous system,' though others are free to call it the 'self,' 'mind,' 'soul,' or whatever best fits their own stories. What is important is the idea that multiple things, evident by their conflicts, may not in fact be disconnected and adversarial entities but could rather be fundamentally, understandably, and valuably interconnected parts of the same thing.

Many practising psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are, for their own activities, at best irrelevant and at worst destructive, and the same probably holds for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line 'In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought.' Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary. Given this complexity and richness, there is substantially less reason than there once was to believe psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things. Pally is, I suspect, more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. That is an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists wrote: 'The sooner we recognize the fact that the complex and higher functional Gestalts that leave the reflex physiologist dumbfounded in fact send roots down to the simplest basal functions of the CNS, the sooner we will see that the previously terminologically insurmountable barrier between the lower levels of neurophysiology and higher behavioural theory simply dissolves away.'

And in 1951 another wrote: 'I am coming more and more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system.'

Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950s to the 1980s) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was 'simple' and 'mechanistic,' which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns, perhaps even threatening and apparently adversarial if one equated the nervous system with 'mind,' or 'self,' or 'soul,' since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy of course went through its own story evolution over this time. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing but only an expression of their needed independent evolutions.

An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these 'shared assumptions' (as Pally does), since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In this case, the isomorphisms tend to imply, rephrasing Gertrude Stein, that there is indeed a there there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of 'transference' and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely 'unconscious,' and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her thought that interaction with the analyst can help by bringing the model to 'consciousness' through the intermediary of recognizing the transference onto the analyst.

The increasing recognition of substantial complexity in the nervous system, together with the presence of identifiable isomorphisms, provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as it does in their similarities/isomorphisms, in the potential for differing and not obviously isomorphic stories productively to modify each other, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than smoothly fitting together, and perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.

Unconscious stories and 'reality.' Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are not necessarily reflective of the 'real world.' Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two 'stories,' with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies, as Pally says, that we never see 'reality'; we only have stories to describe it that result from processes of which we are not consciously aware.

All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of 'reality.' In the present context, what is important is that it is a set of questions that sometimes seems to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who by and large think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of 'reality' and, without being fully conscious of it, do in fact do so. And psychotherapists actually make more use of the idea of 'reality' than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect 'traumas' and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins reflect genetic information and hence bear little or no relation to 'reality' in the sense usually meant. They may, in addition, reflect random 'play' (Grobstein, 1994), putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between 'story' and 'reality,' each set of stories could usefully be modified by greater attention to the other. Differing concepts of 'reality' (perhaps the very concept itself) get in the way of usefully sharing stories. The neurobiologist's/cognitive scientist's preoccupation with 'reality' as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of story in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.

The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the 'neurobiological unconscious' is the same thing as the 'psychotherapeutic unconscious,' and whether the perceived relations between the 'unconscious' and the 'conscious' are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?

An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful 'rubbing of edges' between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow 'superior' to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious, of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, but there is enough of a trend to illustrate the point and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new, perhaps bi-dimensional story that could help with that common problem and perhaps help both traditions as well.

A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness or intent (that is, can be supported by unconscious neural processes). It is not only modelling of the world and prediction and error correction that can occur this way but virtually (and perhaps literally) the entire spectrum of externally observed behaviour, including fleeing from threat, approaching good things, generating novel outputs, learning from doing so, and so on.

This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, and behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is a terrain so surprisingly rich that it creates, for some people, puzzlement about whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.

As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same in many ways. But if they are the same, the question then becomes: in what ways are the ‘unconscious’ and the ‘conscious’ different? Where now are the ‘two stories’? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously. Conscious processing is slower and handles only a small number of variables at any one time. There are likely a host of other differences in style as well, in the handling of number, for example, and of time.

In the present context, however, perhaps the most important difference in style is one that Lacan called attention to from a clinical/philosophical perspective: the conscious (conscious processing) has as an objective ‘coherence’; that is, it attempts to create a story that makes sense simultaneously of all its parts. The unconscious, on the other hand, is much more comfortable with bits and pieces lying around with no global order. To a neurobiologist/cognitive scientist, this makes perfectly good sense. The circuitry underlying the unconscious (sub-cortical circuitry?) is an assembly of different parts, each organized for a different specific purpose and only secondarily linked together to assure some degree of coordination. The circuitry underlying conscious processing (neo-cortical circuitry?), on the other hand, seems both more uniform and more integrated, and to have an objective for which coherence is central.

That central coherence is well illustrated by the phenomenon of ‘positive illusions,’ exemplified by patients who receive a hypnotic suggestion that there is an object in a room and who subsequently walk in ways that avoid the object while providing a variety of unrelated explanations for their behaviour. Similar ‘rationalization’ is, of course, seen in schizophrenic patients and in a variety of less dramatic forms in psychotherapeutic settings. The ‘coherent’ objective is to make a globally organized story out of the disorganized jumble, a story of (and constituting) the ‘self.’

What all of this suggests is that the mind/brain is actually organized to be constantly generating at least two different stories in two different styles. One, written by conscious processes in simpler terms, is a story of/about the ‘self’ and is experienced as such; there are developing insights into how such a story can be constructed using neural circuitry. The other is an unconscious ‘story’ about interactions with the world, perhaps better thought of as a series of different ‘models’ of how various actions relate to various consequences. In many ways, the latter is the grist for the former.

In this sense, we are safely back to the two-story idea that has been central to psychotherapy, but perhaps with some added sophistication deriving from neuroscience/cognitive science. In particular, there is no reason to believe that one story is ‘better’ than the other in any definitive sense. They are different stories based on different styles of story telling, with one having advantages in certain sorts of situations (quick responses, large numbers of variables, more direct relation to immediate experiences of pain and pleasure) and the other in other sorts of situations (time for more deliberate responses, challenges amenable to handling with smaller numbers of variables, greater coherence, greater ability to defer immediate gratification/judgment).

In the clinical/psychotherapeutic context, an important implication of the more neutral view of two story-tellers outlined above is that one ought not to over-value the conscious, nor to expect miracles of the process of making conscious what is unconscious. In the immediate context, the issue is: if the unconscious is capable of ‘correcting prediction errors,’ then why appeal to the conscious to achieve this function? More generally, what is the function of that persistent aspect of psychotherapy that aspires to make the unconscious conscious? And why is it therapeutically effective when it is? Here, it is worth calling special attention to an aspect of Pally's argument that might otherwise get a bit lost in the details of her article: ‘. . . the therapist encourages the wife to stop consciously and consider her assumption that her husband does not properly care about her, and to effortfully consider an alternative view and inhibit her impulse to reject him back. This, in turn, creates a new type of experience, one in which he is indeed more loving, such that she can develop new predictions.’

It is not, as Pally describes it, the simple act of making something conscious that is therapeutically effective. What is necessary is to consciously recompose the story (something that is made possible by its being a story with a small number of variables) and, even more important, to see whether the story generates a new ‘type of experience’ that in turn causes the development of ‘new predictions.’ The latter, I suggest, is an effect of the conscious on the unconscious, an alteration of the unconscious brought about by hearing, entertaining, and hence acting on a new story developed by the conscious. It is not ‘making things conscious’ that is therapeutically effective; it is the exchange of stories that encourages the creation of a new story in the unconscious.

For quite different reasons, Grey (1995) earlier made a suggestion not dissimilar to Pally's, proposing that consciousness is activated when an internal model detects a prediction failure, but acknowledging that he could see no reason ‘why the brain should generate conscious experience of any kind at all.’ It seems to me that, despite her title, it is not the detection of prediction errors that is important in Pally's story. Instead, it is the detection of mismatches between two stories, one unconscious and the other conscious, and the resulting opportunity for both to shape a less trouble-making new story. That, in brief, is why the brain ‘should generate conscious experience’: to reap the benefits of having a second story teller with a different style. Paraphrasing Descartes, one might say ‘I am, and I can think, therefore I can change who I am.’ It is not only the neurobiological ‘conscious’ that can undergo change; it is the neurobiological ‘unconscious’ as well.

More generally, the most effective psychotherapy requires the recognition, one rapidly emerging from neuroscience/cognitive science as well, that the brain/mind has evolved with two (or more) independent story tellers and has done so precisely because there are advantages to having independent story tellers that generate and exchange different stories. The advantage is that each can learn from the other, and the mechanisms to convey the stories back and forth, and for each story teller to learn from the stories of the other, are a part of our evolutionary endowment as well. The problems that bring patients into a therapist's office are problems in the breakdown of story exchange, for any of a variety of reasons, and the challenge for the therapist is to reinstate the confidence of each story teller in the value of the stories created by the other. Neither the conscious nor the unconscious is primary; they function best as an interdependent loop, with each developing its own story facilitated by the semi-independent story of the other. In such an organization, there is not only no ‘real’ story and no primacy for consciousness; there is only the ongoing development and, ideally, effective sharing of different stories.

There are, in the story I am outlining, implications for neuroscience/cognitive science as well. The obvious key questions are what one means (in terms of neurons and neuronal assemblies) by ‘stories,’ and in what ways their construction and representation differ in unconscious and conscious neural processing. But even more important, if the story I have outlined makes sense, what are the neural mechanisms by which unconscious and conscious stories are exchanged and by which each kind of story impacts on the other? And why (again in neural terms) does the exchange sometimes break down and fail in a way that requires a psychotherapist, an additional story teller, to repair?

Just as the unconscious and the conscious are engaged in a process of evolving stories for separate reasons and using separate styles, so too have been and will continue to be neuroscience/cognitive science and psychotherapy. And it is valuable that both communities continue to do so. But there is every reason to believe that the different stories are indeed about the same thing, not only because of isomorphisms between the differing stories but equally because the stories of each can, if listened to, be demonstrably of value to the stories of the other. When breakdowns in story sharing occur, they require people in each community who are daring enough to listen and be affected by the stories of the other community. Pally has done us all a service as such a person. I hope my reactions to her article will help further to construct the bridge she has helped to lay, and that others will feel inclined to join in an act of collective story telling that has enormous intellectual potential and relates as well very directly to a serious social need in the mental health arena. Indeed, there are reasons to believe that an enhanced skill at hearing, respecting, and learning from differing stories about similar things would be useful in a wide array of contexts.

There is now a more satisfactory range of ideas available [in the field of consciousness studies] . . . They mostly involve quantum objects called Bose-Einstein condensates that may be capable of forming ephemeral but extended structures in the brain (Pessa). Marshall's original idea (based on the work of Fröhlich) was that the condensates that comprise the physical basis of mind form from the activity of vibrating molecules (dipoles) in nerve cell membranes. One of us (Clarke) has found theoretical evidence that the distribution of energy levels for such arrays of molecules prevents this happening in the way that Marshall first thought. However, the occurrence of similar condensates centring around the microtubules that are an important part of the structure of every cell, including nerve cells, remains a theoretical possibility (del Giudice et al.). Hameroff has pointed out that single-cell organisms such as ‘paramecium’ can perform quite complicated actions normally thought to need a brain. He suggests that their ‘brain’ is in their microtubules. Shape changes in the constituent proteins (tubulin) could subserve computational functions and would involve quantum phenomena of the sort envisaged by del Giudice. This raises the intriguing possibility that the most basic cognitive unit is provided, not by the nerve cell synapse as is usually supposed, but by the microtubular structure within cells. The underlying intuition is that the structures formed by Bose-Einstein condensates are the building blocks of mental life; in relation to perception they are models of the world, transforming a pleasant view, say, into a mental structure that represents some of the inherent qualities of that view.

We thought that, if there is anything to ideas of this sort, the quantum nature of awareness should be detectable experimentally. Holism and non-locality are features of the quantum world with no precise classical equivalents. The former means that interacting systems have to be considered as wholes: you cannot deal with one part in isolation from the rest. Non-locality means, among other things, that spatial separation between its parts does not alter the requirement to deal with an interacting system holistically. If we could detect these features in relation to awareness, we would show that consciousness cannot be understood solely in terms of classical concepts.

In investigating the genetic roots of thought and word, we attempted to discover the relation between thought and speech at the earliest stages of phylogenetic and ontogenetic development. We found no specific interdependence between the genetic roots of thought and of word. It became plain that the inner relationship we were looking for was not a prerequisite for, but rather a product of, the historical development of human consciousness.

In animals, even in anthropoids whose speech is phonetically like human speech and whose intellect is akin to man’s, speech and thinking are not interrelated. A prelinguistic period in thought and a preintellectual period in speech undoubtedly exist also in the development of the child. Thought and word are not connected by a primary bond. A connection originates, changes, and grows in the course of the evolution of thinking and speech.

It would be wrong, however, to regard thought and speech as two unrelated processes either parallel or crossing at certain points and mechanically influencing each other. The absence of a primary bond does not mean that a connection between them can be formed only in a mechanical way. The futility of most of the earlier investigations was largely due to the assumption that thought and word were isolated, independent elements, and verbal thought the fruit of their external union.

The method of analysis based on this conception was bound to fail. It sought to explain the properties of verbal thought by breaking it up into its component elements, thought and word, neither of which, taken separately, possesses the properties of the whole. This method is not true analysis, helpful in solving concrete problems. It leads, rather, to generalisation. We compared it with the analysis of water into hydrogen and oxygen, which can result only in findings applicable to all water existing in nature, from the Pacific Ocean to a raindrop. Similarly, the statement that verbal thought is composed of intellectual processes and speech functions proper applies to all verbal thought and all its manifestations, and explains none of the specific problems facing the student of verbal thought.

We tried a new approach to the subject and replaced analysis into elements by analysis into units, each of which retains in simple form all the properties of the whole. We found this unit of verbal thought in word meaning.

The meaning of a word represents such a close amalgam of thought and language that it is hard to tell whether it is a phenomenon of speech or a phenomenon of thought. A word without meaning is an empty sound; meaning, therefore, is a criterion of ‘word,’ its indispensable component. It would seem, then, that it may be regarded as a phenomenon of speech. But from the point of view of psychology, the meaning of every word is a generalisation or a concept. And since generalisations and concepts are undeniably acts of thought, we may regard meaning as a phenomenon of thinking. It does not follow, however, that meaning formally belongs in two different spheres of psychic life. Word meaning is a phenomenon of thought only insofar as thought is embodied in speech, and of speech only insofar as speech is connected with thought and illumined by it. It is a phenomenon of verbal thought, or meaningful speech, a union of word and thought.

Our experimental investigations fully confirm this basic thesis. They not only proved that concrete study of the development of verbal thought is made possible by the use of word meaning as the analytical unit, but they also led to a further thesis, which we consider the major result of our study and which issues directly from the first: the thesis that word meanings develop. This insight must replace the postulate of the immutability of word meanings.

From the point of view of the old schools of psychology, the bond between word and meaning is an associative bond, established through the repeated simultaneous perception of a certain sound and a certain object. A word calls to mind its content as the overcoat of a friend reminds us of that friend, or a house of its inhabitants. The association between word and meaning may grow stronger or weaker, be enriched by linkage with other objects of a similar kind, spread over a wider field, or become more limited, i.e., it may undergo quantitative and external changes, but it cannot change its psychological nature. To do that, it would have to cease being an association. From that point of view, any development in word meanings is inexplicable and impossible, an implication that impeded linguistics as well as psychology. Once having committed itself to the association theory, semantics persisted in treating word meaning as an association between a word’s sound and its content. All words, from the most concrete to the most abstract, appeared to be formed in the same manner in regard to meaning, and to contain nothing peculiar to speech as such; a word made us think of its meaning just as any object might remind us of another. It is hardly surprising that semantics did not even pose the larger question of the development of word meanings. Development was reduced to changes in the associative connections between single words and single objects: a word might come to denote at first one object and then become associated with another, just as an overcoat, having changed owners, might remind us first of one person and later of another. Linguistics did not realize that in the historical evolution of language the very structure of meaning and its psychological nature also change. From primitive generalisations, verbal thought rises to the most abstract concepts. It is not merely the content of a word that changes, but the way in which reality is generalised and reflected in a word.

Equally inadequate is the association theory in explaining the development of word meanings in childhood. Here, too, it can account only for the purely external, quantitative changes in the bonds uniting word and meaning, for their enrichment and strengthening, but not for the fundamental structural and psychological changes that can and do occur in the development of language in children.

Oddly enough, the fact that associationism in general had been abandoned for some time did not seem to affect the interpretation of word and meaning. The Wuerzburg school, whose main object was to prove the impossibility of reducing thinking to a mere play of associations and to demonstrate the existence of specific laws governing the flow of thought, did not revise the association theory of word and meaning, or even recognise the need for such a revision. It freed thought from the fetters of sensation and imagery and from the laws of association, and turned it into a purely spiritual act. By so doing, it went back to the prescientific concepts of St. Augustine and Descartes and finally reached extreme subjective idealism. The psychology of thought was moving toward the ideas of Plato. Speech, at the same time, was left at the mercy of association. Even after the work of the Wuerzburg school, the connection between a word and its meaning was still considered a simple associative bond. The word was seen as the external concomitant of thought, its attire only, having no influence on its inner life. Thought and speech had never been as widely separated as during the Wuerzburg period. The overthrow of the association theory in the field of thought actually increased its sway in the field of speech.
