Indeterminism

For a similar subject, see Indeterminacy (philosophy).

Indeterminism is the idea that events (certain events, or events of certain types) are not caused, or not caused deterministically.

It is the opposite of determinism and related to chance. It is highly relevant to the philosophical problem of free will, particularly in the form of metaphysical libertarianism. In science, most notably in quantum theory, indeterminism is the belief that no event is certain and that the outcome of anything is at best probabilistic. The Heisenberg uncertainty relations and the "Born rule", proposed by Max Born, are often starting points in support of the indeterministic nature of the universe.[1] Indeterminism has also been asserted by Sir Arthur Eddington and Murray Gell-Mann, and was promoted by the French biologist Jacques Monod in his essay "Chance and Necessity". The physicist-chemist Ilya Prigogine argued for indeterminism in complex systems.

Necessary but insufficient causation

Further information: Necessary and sufficient conditions

Indeterminists do not have to deny that causes exist. Instead, they can maintain that the only causes that exist are of a type that do not constrain the future to a single course; for instance, they can maintain that only necessary and not sufficient causes exist. The necessary/sufficient distinction works as follows:

If x is a necessary cause of y, then the presence of y necessarily implies that x preceded it. The presence of x, however, does not imply that y will occur.

If x is a sufficient cause of y, then the presence of x necessarily implies the presence of y. (However, another cause z may alternatively cause y. Thus the presence of y does not imply the presence of x.)
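
As a toy sketch (all probabilities invented for illustration), the following simulation exhibits a necessary but insufficient cause: every occurrence of the effect is preceded by the cause, yet the cause does not settle whether the effect occurs.

```python
import random

# Toy model (invented probabilities): oxygen is a *necessary* cause of fire,
# but not a *sufficient* one -- a spark is also required, and the spark
# occurs indeterministically.
def world():
    oxygen = random.random() < 0.9   # x: the necessary cause
    spark = random.random() < 0.3    # an additional chancy factor
    fire = oxygen and spark          # y: the effect
    return oxygen, fire

trials = [world() for _ in range(100_000)]
# Every occurrence of y (fire) was preceded by x (oxygen) ...
assert all(oxygen for oxygen, fire in trials if fire)
# ... but x does not guarantee y: oxygen often occurs without fire.
print(any(not fire for oxygen, fire in trials if oxygen))  # True
```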

As Daniel Dennett points out in Freedom Evolves, it is possible for everything to have a necessary cause, even while indeterminism holds and the future is open, because a necessary condition does not lead to a single inevitable effect. Thus "everything has a cause" is, in his opinion, not a clear statement of determinism. Still, the question might arise why this effect rather than that one occurred: as long as a cause (something in the past) determines the answer to the question "effect A or effect B?", determinism holds. On this basis "everything has a cause" might still be understood as an expression of determinism.

Probabilistic causation

Main article: Probabilistic causation

Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer. As a result, many turn to a notion of probabilistic causation. Informally, A probabilistically causes B if A's occurrence increases the probability of B. This is sometimes interpreted to reflect the imperfect knowledge of a deterministic system but other times interpreted to mean that the causal system under study has an inherently indeterministic nature. (Propensity probability is an analogous idea, according to which probabilities have an objective existence and are not just limitations in a subject's knowledge).[2]
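
As an illustrative sketch (the rates below are invented for the example), one can estimate the two probabilities by simulation and check the defining inequality P(B | A) > P(B):

```python
import random

# Invented rates, purely for illustration: smoking does not *always* produce
# cancer, but it raises the probability of cancer.
def person():
    smokes = random.random() < 0.3
    risk = 0.20 if smokes else 0.05
    return smokes, random.random() < risk

data = [person() for _ in range(200_000)]
p_cancer = sum(cancer for _, cancer in data) / len(data)
smokers = [cancer for smokes, cancer in data if smokes]
p_cancer_given_smoking = sum(smokers) / len(smokers)
# A probabilistically causes B when P(B | A) > P(B):
print(p_cancer_given_smoking, ">", p_cancer)   # ~0.20 > ~0.095
```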

It can be shown that a realization of any probability distribution can be obtained by applying a deterministic function (namely, the inverse distribution function) to a uniformly distributed random variable (i.e. an "absolutely random" one[3]); the probabilities are then contained entirely in the deterministic element. A simple demonstration is to shoot randomly within a square and then deterministically interpret a relatively large subsquare as the more probable outcome.
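
A minimal sketch of this fact is inverse-transform sampling; the exponential target and the helper name are our own illustrative choices:

```python
import math
import random

# Inverse-transform sampling: a deterministic function (the inverse CDF)
# applied to a uniform random variable yields the target distribution --
# here an exponential with rate lam. The probabilities are carried entirely
# by the deterministic function; the only randomness is the uniform draw.
def exponential_sample(lam: float) -> float:
    u = random.random()              # the "absolutely random" ingredient
    return -math.log(1.0 - u) / lam  # deterministic inverse CDF

samples = [exponential_sample(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~ 1/lam = 0.5
```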

Intrinsic indeterminism versus unpredictability

A distinction is generally made between indeterminism and the mere inability to measure the variables (limits of precision). This is especially the case for physical indeterminism (as proposed by various interpretations of quantum mechanics). Yet some philosophers have argued that indeterminism and unpredictability are synonymous.[4]

Philosophy

One of the important philosophical implications of determinism is that, according to incompatibilists, it undermines many versions of free will. Correspondingly, believers in free will often appeal to physical indeterminism. (See compatibilism for a third option.)

Aristotle

The first major philosopher to argue convincingly for some indeterminism was probably Aristotle. He described four possible causes (material, efficient, formal, and final). Aristotle's word for these causes was αἰτίαι (aitiai, as in aetiology), which translates as causes in the sense of the multiple factors responsible for an event. Aristotle did not subscribe to the simplistic "every event has a (single) cause" idea that was to come later.

In his Physics and Metaphysics, Aristotle said there were accidents (συμβεβηκός, sumbebekos) caused by nothing but chance (τύχη, tukhe). He noted that he and the early physicists found no place for chance among their causes.

We have seen how far Aristotle distances himself from any view which makes chance a crucial factor in the general explanation of things. And he does so on conceptual grounds: chance events are, he thinks, by definition unusual and lacking certain explanatory features: as such they form the complement class to those things which can be given full natural explanations.[5]

— R.J. Hankinson, "Causes" in Blackwell Companion to Aristotle

Aristotle opposed his accidental chance to necessity:

Nor is there any definite cause for an accident, but only chance (τυχόν), namely an indefinite (ἀόριστον) cause. (Metaphysics, Book V, 1025a25)

It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not.[6]

Epicurus

One generation after Aristotle, Epicurus argued that as atoms moved through the void, there were occasions when they would "swerve" (clinamen) from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action were deterministically caused. For Epicurus, the occasional intervention of arbitrary gods would be preferable to strict determinism.

Leucippus

The first concept of chance is found in the Atomism of Leucippus, often confused with that of Democritus, though recent studies show many differences between the two. The first assertion about chance is the Leucippus fragment that says:

"ὁ τοίνυν κόσμος συνέστη περικεκλασμένῳ σχήματι ἐσχηματισμένος τὸν τρόπον τοῦτον. τῶν ἀτόμων σωμάτων ἀπρονόητον καὶ τυχαίαν ἐχόντων τὴν κίνησιν συνεχῶς τε καὶ τάχιστα κινουμένων"


"The cosmos, then, became like a spherical form in this way: the atoms being submitted to a casual and unpredictable movement, quickly and incessantly".[7]

Early modern philosophy

In 1729, the Testament of Jean Meslier states:

"The matter, by virtue of its own active force, moves and acts in blind manner".[8]

Soon after, Julien Offray de La Mettrie wrote in his L'Homme Machine (1748, published anonymously):

"Perhaps, the cause of man's existence is just in existence itself? Perhaps he is by chance thrown in some point of this terrestrial surface without any how and why".

In his Anti-Sénèque [Traité de la vie heureuse, par Sénèque, avec un Discours du traducteur sur le même sujet, 1750] we read:

"Then, the chance has thrown us in life".[9]

In the 19th century, the French philosopher Antoine-Augustin Cournot theorized chance in a new way, as a series of non-linear causes. He wrote in Essai sur les fondements de nos connaissances (1851):

"It is not because of rarity that the chance is actual. On the contrary, it is because of chance they produce many possible others."[10]

Charles Peirce

Tychism (Greek: τύχη "chance") is a thesis proposed by the American philosopher Charles Sanders Peirce in the 1890s.[11] It holds that absolute chance, also called spontaneity, is a real factor operative in the universe. It may be considered both the direct opposite of Einstein's oft-quoted dictum that "God does not play dice with the universe" and an early philosophical anticipation of Werner Heisenberg's uncertainty principle.

Peirce does not, of course, assert that there is no law in the universe. On the contrary, he maintains that an absolutely chance world would be a contradiction and thus impossible. Complete lack of order is itself a sort of order. The position he advocates is rather that there are in the universe both regularities and irregularities.

Karl Popper comments[12] that Peirce's theory received little contemporary attention, and that other philosophers did not adopt indeterminism until the rise of quantum mechanics.

Arthur Holly Compton

In 1931, Arthur Holly Compton championed the idea of human freedom based on quantum indeterminacy and invented the notion of amplification of microscopic quantum events to bring chance into the macroscopic world. In his somewhat bizarre mechanism, he imagined sticks of dynamite attached to his amplifier, anticipating the Schrödinger's cat paradox.[13]

Reacting to criticisms that his ideas made chance the direct cause of our actions, Compton clarified the two-stage nature of his idea in an Atlantic Monthly article in 1955. First there is a range of random possible events, then one adds a determining factor in the act of choice.

A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law. It is the inner knowledge that he is in fact doing what he intends to do that tells the actor himself that he is free.[14]
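
Compton's two-stage picture can be caricatured in a few lines of code. This is purely an illustrative sketch of ours (the options and the preference function are invented), not Compton's own formalism:

```python
import random

# Stage 1 (chance): an undetermined range of possible events is generated.
# Stage 2 (choice): a deterministic selection among them, standing in for
# the agent's "act of choice".
def two_stage_choice(preference):
    options = [random.random() for _ in range(5)]  # chance proposes
    return max(options, key=preference)            # choice disposes

print(two_stage_choice(lambda x: -abs(x - 0.5)))   # picks option nearest 0.5
```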

Compton welcomed the rise of indeterminism in 20th century science, writing:

In my own thinking on this vital subject I am in a much more satisfied state of mind than I could have been at any earlier stage of science. If the statements of the laws of physics were assumed correct, one would have had to suppose (as did most philosophers) that the feeling of freedom is illusory, or if [free] choice were considered effective, that the laws of physics ... [were] unreliable. The dilemma has been an uncomfortable one.[15]

Karl Popper

In his essay Of Clouds and Clocks, included in his book Objective Knowledge, Popper contrasted "clouds", his metaphor for indeterministic systems, with "clocks", meaning deterministic ones. He sided with indeterminism, writing:

I believe Peirce was right in holding that all clocks are clouds to some considerable degree — even the most precise of clocks. This, I think, is the most important inversion of the mistaken determinist view that all clouds are clocks.[16]

Popper was also a promoter of propensity probability.

Robert Kane

Kane is one of the leading contemporary philosophers on free will.[17][18] Advocating what is termed within philosophical circles "libertarian freedom", Kane argues that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)".[19] It is important to note that the crux of Kane's position is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will. It is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough; our actions could be random without being in our control. The control is found in "ultimate responsibility".

What allows for ultimate responsibility of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs — those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refrainings in the life histories of agents that are required for UR. UR does not require that every act done of our own free will be undetermined and thus that, for every act or choice, we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), he is responsible for the actions that are a result of his character.

Mark Balaguer

Mark Balaguer, in his book Free Will as an Open Scientific Problem,[20] argues similarly to Kane. He believes that, conceptually, free will requires indeterminism, and that whether the brain behaves indeterministically is open to further empirical research. He has also written on the matter in "A Scientifically Reputable Version of Indeterministic Libertarian Free Will".[21]

Science

See also: Philosophy of physics and indeterminism

Mathematics

In probability theory, a stochastic process, or sometimes random process, is the counterpart to a deterministic process (or deterministic system). Instead of dealing with only one possible reality of how the process might evolve over time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might take, though some paths may be more probable and others less so.
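
A minimal sketch in Python: each call below is one realization of the same stochastic process from the same starting point, so the futures differ even though the initial condition does not.

```python
import random

# One stochastic process, many possible realizations: a symmetric random walk.
def random_walk(steps: int, start: int = 0) -> int:
    position = start
    for _ in range(steps):
        position += random.choice((-1, 1))
    return position

# Identical initial condition, different futures:
print([random_walk(100) for _ in range(5)])  # e.g. [4, -12, 8, 0, -6]
```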

Classical and relativistic physics

The idea that Newtonian physics proved causal determinism was highly influential in the early modern period. "Thus physical determinism [...] became the ruling faith among enlightened men; and everybody who did not embrace this new faith was held to be an obscurantist and a reactionary".[22] However: "Newton himself may be counted among the few dissenters, for he regarded the solar system as imperfect, and consequently as likely to perish".[23]

Classical chaos is not usually considered an example of indeterminism, as it can occur in deterministic systems such as the three-body problem.

John Earman has argued that most physical theories are indeterministic.[24][25] For instance, Newtonian physics admits solutions where particles accelerate continuously, heading out towards infinity. By the time-reversibility of the laws in question, particles could also head inwards, unprompted by any pre-existing state. He calls such hypothetical particles "space invaders".

John D. Norton has suggested another indeterministic scenario, known as Norton's Dome, where a particle is initially situated on the exact apex of a dome.[26]
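
In outline, following Norton's published example, a unit-mass particle resting at the apex obeys the equation of motion below, which admits both the trivial solution and a family of "spontaneous motion" solutions:

```latex
\frac{d^{2}r}{dt^{2}} = \sqrt{r},
\qquad
r(t) =
\begin{cases}
  0, & t \le T,\\
  \tfrac{1}{144}\,(t - T)^{4}, & t \ge T,
\end{cases}
\qquad \text{for any } T \ge 0.
```

Since every value of T yields a valid solution from the same initial data, Newtonian mechanics leaves both the moment and the fact of the particle's departure undetermined.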

Branching space-time is a theory uniting indeterminism and the special theory of relativity. The idea originated with Nuel Belnap.[27] The equations of general relativity admit of both indeterministic and deterministic solutions.

Boltzmann

Ludwig Boltzmann was one of the founders of statistical mechanics and the modern atomic theory of matter. He is remembered for his discovery that the second law of thermodynamics is a statistical law stemming from disorder. He also speculated that the ordered universe we see is only a small bubble in a much larger sea of chaos. The Boltzmann brain is a similar idea. He can be considered one of the few indeterminists to embrace pure chance.

Evolution and biology

Darwinian evolution relies more heavily on the chance element of random mutation than did the earlier evolutionary theory of Herbert Spencer. However, the question of whether evolution requires genuine ontological indeterminism remains open to debate.[28]

In the essay Chance and Necessity (1970), Jacques Monod rejected the role of final causation in biology, arguing instead that a mixture of efficient causation and "pure chance" leads to teleonomy, or merely apparent purposefulness.

The Japanese theoretical population geneticist Motoo Kimura emphasised the role of indeterminism in evolution. According to his neutral theory of molecular evolution: "at the molecular level most evolutionary change is caused by random drift of gene mutants that are equivalent in the face of selection".[29]

Prigogine

In his 1997 book, The End of Certainty, Prigogine contends that determinism is no longer a viable scientific belief. "The more we know about our universe, the more difficult it becomes to believe in determinism." This is a major departure from the approach of Newton, Einstein and Schrödinger, all of whom expressed their theories in terms of deterministic equations. According to Prigogine, determinism loses its explanatory power in the face of irreversibility and instability.[30]

Prigogine traces the dispute over determinism back to Darwin, whose attempt to explain individual variability according to evolving populations inspired Ludwig Boltzmann to explain the behavior of gases in terms of populations of particles rather than individual particles.[31] This led to the field of statistical mechanics and the realization that gases undergo irreversible processes. In deterministic physics, all processes are time-reversible, meaning that they can proceed backward as well as forward through time. As Prigogine explains, determinism is fundamentally a denial of the arrow of time. With no arrow of time, there is no longer a privileged moment known as the "present," which follows a determined "past" and precedes an undetermined "future." All of time is simply given, with the future as determined or undetermined as the past. With irreversibility, the arrow of time is reintroduced to physics. Prigogine notes numerous examples of irreversibility, including diffusion, radioactive decay, solar radiation, weather and the emergence and evolution of life. Like weather systems, organisms are unstable systems existing far from thermodynamic equilibrium. Instability resists standard deterministic explanation. Instead, due to sensitivity to initial conditions, unstable systems can only be explained statistically, that is, in terms of probability.

Prigogine asserts that Newtonian physics has now been "extended" three times, first with the use of the wave function in quantum mechanics, then with the introduction of spacetime in general relativity and finally with the recognition of indeterminism in the study of unstable systems.

Quantum mechanics

Main article: quantum indeterminacy

At one time, it was assumed in the physical sciences that if the behavior observed in a system cannot be predicted, the problem is due to lack of fine-grained information, so that a sufficiently detailed investigation would eventually result in a deterministic theory ("If you knew exactly all the forces acting on the dice, you would be able to predict which number comes up").

However, the advent of quantum mechanics removed the underpinning from that approach, with the claim that (at least according to the Copenhagen interpretation) the most basic constituents of matter at times behave indeterministically. This comes from the collapse of the wave function, in which the state of a system upon measurement cannot in general be predicted. Quantum mechanics only predicts the probabilities of possible outcomes, which are given by the Born rule. Non-deterministic behavior upon wave function collapse is not only a feature of the Copenhagen interpretation, with its observer-dependence, but also of objective collapse theories.
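
As a small illustration of the standard formalism (the prepared state below is an arbitrary example of ours):

```python
import numpy as np

# The quantum state fixes only the *probabilities* of measurement outcomes
# (Born rule), not the outcome itself.
state = np.array([1, 1j]) / np.sqrt(2)   # |psi> = (|0> + i|1>)/sqrt(2)
probabilities = np.abs(state) ** 2       # Born rule: p_k = |<k|psi>|^2
print(probabilities)                     # [0.5 0.5]

# Repeated measurements on identically prepared systems then sample
# indeterministically from this distribution:
print(np.random.choice([0, 1], size=10, p=probabilities))
```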

Opponents of quantum indeterminism suggested that determinism could be restored by formulating a new theory in which additional information, so-called hidden variables,[32] would allow definite outcomes to be determined. For instance, in 1935, Einstein, Podolsky and Rosen wrote a paper titled "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" arguing that such a theory was in fact necessary to preserve the principle of locality. In 1964, John S. Bell was able to define a theoretical test for these local hidden variable theories, which was reformulated as a workable experimental test through the work of Clauser, Horne, Shimony and Holt. The negative result of the 1980s tests by Alain Aspect ruled such theories out, provided certain assumptions about the experiment hold. Thus any interpretation of quantum mechanics, including deterministic reformulations, must either reject locality or reject counterfactual definiteness altogether. David Bohm's theory is the main example of a non-local deterministic quantum theory.
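
The size of the violation can be sketched from the textbook singlet-state correlations; the angles below are the standard maximizing choice, not data from the experiments cited:

```python
import math

# Singlet-state correlation predicted by quantum mechanics for measurement
# angles a and b: E(a, b) = -cos(a - b).
def E(a: float, b: float) -> float:
    return -math.cos(a - b)

# Standard angle choice that maximizes the CHSH combination:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```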

The many-worlds interpretation is said to be deterministic, but experimental results still cannot be predicted: experimenters do not know which 'world' they will end up in. Technically, counterfactual definiteness is lacking.

A notable consequence of quantum indeterminism is the Heisenberg uncertainty principle, which prevents the simultaneous accurate measurement of all of a particle's properties.
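
In standard notation, the position-momentum form of the principle reads:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```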

Cosmology

Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales, and, upon leaving the horizon, to "freeze in". At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation.

Neuroscience

Neuroscientists such as Bjoern Brembs and Christof Koch believe thermodynamically stochastic processes in the brain are the basis of free will, and that even very simple organisms such as flies have a form of free will.[33] Similar ideas are put forward by some philosophers such as Robert Kane.

Despite recognizing indeterminism to be a very low-level, necessary prerequisite, Bjoern Brembs says that it is not even close to being sufficient for addressing things like morality and responsibility.[33] Edward O. Wilson does not extrapolate from bugs to people,[34] and Corina E. Tarnita warns against trying to draw parallels between people and insects, since human selflessness and cooperation are of a different sort, also involving the interaction of culture and sentience, not just genetics and environment.[35]

Other views

Against Einstein and others who advocated determinism, indeterminism—as championed by the English astronomer Sir Arthur Eddington—says that a physical object has an ontologically undetermined component that is not due to the epistemological limitations of physicists' understanding. The uncertainty principle, then, would not necessarily be due to hidden variables but to an indeterminism in nature itself.[36]

Determinism and indeterminism are examined in Causality and Chance in Modern Physics by David Bohm. He speculates that, since determinism can emerge from underlying indeterminism (via the law of large numbers), and that indeterminism can emerge from determinism (for instance, from classical chaos), the universe could be conceived of as having alternating layers of causality and chaos.[37]

References

  1. ^The Born rule itself does not imply whether the observed indeterminism is due to the object, to the measurement system, or both. The ensemble interpretation by Born does not require fundamental indeterminism and lack of causality.
  2. ^Stanford Encyclopedia of Philosophy: Interpretations of Probability
  3. ^The uniform distribution is the most "agnostic" distribution, representing lack of any information. Laplace in his theory of probability was apparently the first one to notice this. Currently, it can be shown using definitions of entropy.
  4. ^Popper, K (1972). Of Clouds and Clocks: an approach to the rationality and the freedom of man, included in Objective Knowledge. Oxford Clarendon Press. p. 220.  
  5. ^Hankinson, R.J. (2009). "Causes". Blackwell Companion to Aristotle. p. 223. 
  6. ^Aristotle, Metaphysics, Book VI, 1027a29
  7. ^H. Diels and W. Kranz, Die Fragmente der Vorsokratiker, Berlin: Weidmann, 1952, 24, I, 1
  8. ^Meslier, J. The Testament.
  9. ^de La Mettrie, J.O.: Anti-Sénèque
  10. ^Cournot, A.A: Essai sur les fondements de nos connaissances et sur les caractères de la critique philosophique, § 32.
  11. ^Peirce, C. S.: The Doctrine of Necessity Examined, The Monist, 1892
  12. ^Popper, K: Of Clouds and Clocks, included in Objective Knowledge, revised, 1978, p. 231.
  13. ^Science, 74, no. 1911, August 14, 1931.
  14. ^"Science and Man’s Freedom", in The Cosmos of Arthur Holly Compton, 1967, Knopf, p. 115
  15. ^Compton, A.H. The Human Meaning of Science, p. ix
  16. ^Popper, K: Of Clouds and Clocks, included in Objective Knowledge, revised, 1978, p. 215.
  17. ^Kane, R. (ed.) Oxford Handbook of Free Will
  18. ^Information Philosophers "Robert Kane is the acknowledged dean of the libertarian philosophers writing actively on the free will problem."
  19. ^Kane (ed.): Oxford Handbook of Free Will, p. 11.
  20. ^Notre Dame Reviews: Free Will as an Open Scientific Problem
  21. ^"Mark Balaguer: A Scientifically Reputable Version of Indeterministic Libertarian Free Will". turingc.blogspot.pt. 
  22. ^Popper, K: Of Clouds and Clocks, included in Objective Knowledge, revised, 1978, p. 212.
  23. ^Popper, 1978, citing, Henry Pemberton's A View of Sir Isaac Newton's Philosophy
  24. ^Earman, J. Determinism: What We Have Learned, and What We Still Don't Know
  25. ^The Stanford Encyclopedia of Philosophy: Causal Determinism
  26. ^Stanford Encyclopedia of Philosophy Causal Determinism
  27. ^Conference on Branching Space Time
  28. ^Millstein, R.L.: Is the Evolutionary Process Deterministic or Indeterministic
  29. ^Kimura, M. The neutral theory of molecular evolution, (The Science, No. 1, 1980, p. 34)
  30. ^End of Certainty by Ilya Prigogine, pp. 162–85, Free Press; 1st edition (August 17, 1997) ISBN 978-0-684-83705-5
  31. ^End of Certainty by Ilya Prigogine, pp. 19–21, Free Press; 1st edition (August 17, 1997) ISBN 978-0-684-83705-5
  32. ^Cosmos Magazine: How Much Free Will Do We Have
  33. ^ a b BBC Science: Free Will Similar in Animals, Humans—But Not So Free
  34. ^"Is Homosexuality an Evolutionary Step Towards the Superorganism?". Wired. 2008-01-03. 
  35. ^"E.O. Wilson Proposes New Theory of Social Evolution". Wired. 2010-08-26. 
  36. ^de Koninck, Charles (2008). "The philosophy of Sir Arthur Eddington and The problem of indeterminism". The Writings of Charles de Koninck. Notre Dame, Ind.: University of Notre Dame Press. ISBN 978-0-268-02595-3. OCLC 615199716.
  37. ^Bohm, D: Causality and Chance in Modern Physics, pp. 29–33

I. Determinism and Indeterminism in Philosophical Thought

1. Definition. "Determinism" is commonly understood as the thesis that «the laws which govern the universe (or a subsystem), together with the appropriate initial conditions, uniquely determine the entire time evolution of the universe (or subsystem)». "Indeterminism" is the negation of this thesis.

Only recently has this term entered common usage. It dates back to 1927, the same period in which the "uncertainty principle" was discovered by Heisenberg (1901-1976), and it has been used throughout the 20th century by physicists and scientists. However, philosophical problems lie hidden in the term which, being of a philosophical nature, are much older and more far-reaching than the current question of the interpretation of quantum mechanics and physical theories: it involves the classical problems of necessity and contingency, of being and becoming, and of causality. As early as the 4th century B.C., Aristotle observed that "chance as well and spontaneity are reckoned among causes: many things are said both to be and to become as a result of chance and spontaneity" (Aristotle, Physics, II, 4, 195b). Moreover, "we observe that some things always happen in the same way, and others, for the most part. It is clearly of neither of these that chance is said to be the cause, nor can the 'effect of chance' be identified with any of the things that come to pass by necessity and always, or for the most part. But as there is a third class of events besides these two -- events which all say are 'by chance' -- it is plain that there is such a thing as chance and spontaneity; for we know that things of this kind are due to chance and that things due to chance are of this kind" (Aristotle, Physics, II, 5, 196b). After Aristotle, this problem continued to be one of the central themes of the history of philosophy and can be found in the works of practically every author. For a correct understanding of the relationship between determinism and indeterminism, it is therefore necessary to adopt an interdisciplinary approach which formulates and relates the relevant terms from the scientific, philosophical, and theological points of view.

2. Determinism and Liberty. In their quest for knowledge, from antiquity to the present era, human beings have taken two different paths of inquiry: a) one involves knowledge "external to the individual" (the world, nature, the cosmos); b) the other involves knowledge "internal to the individual" (thought, emotions, perception of freedom, self-consciousness, etc.).

Often, the first path to knowledge is called the "cosmological approach" and the second is known as the "anthropological approach". One may consider the problem of which path comes first and which really leads to the origin of knowledge. That is, one can study the question whether knowledge originates from sensible experience of the external world (nihil est in intellectu quod prius non fuerit in sensu, as the Aristotelian-Thomistic tradition holds), and only later does the "self" have an internal experience of itself through the process of reflection; or whether, alternatively, knowledge originates from innate ideas (Plato, Descartes), from intuition (Bergson), or from divine illumination (Augustine, Bonaventure). Yet aside from the philosophical question as to which path is first and which is really related to the origin of knowledge, there remains the problem of the necessity and contingency of beings and events, of the chain of causes in the world external to us; a problem which has always involved that of the interior reality of human liberty. If a univocal (determined ad unum) effect follows necessarily from every cause, then the free choice of the will would be impossible because everything, including the choices one makes, would be already determined. And yet we experience our own free will. This problem has appeared throughout the history of thought and involves explaining the internal perception of liberty in a way which is compatible with a correct philosophical and scientific description of the external world.

The result of this intellectual effort has been manifold. Some thinkers have preferred to deny the existence of liberty in favor of another datum of experience. This preference is most congenial to the rationalists. For example, Spinoza (1632-1677) went against the evidence of internal experience by declaring free will to be a pure illusion. Others have denied the causal relation between events ("principle of causality") and have maintained that cause-effect relations are none other than a simple operation performed by the human mind by habit, rather than a law written in the nature of things. This is the vision most congenial to the empiricists such as David Hume (1711-1776). Other authors have sought a way to understand the coexistence of liberty and causality in order to be consistent with what is given through experience, without maintaining that either of the two facts of experience, namely the existence of liberty and that of a causal order, is mere appearance. They acknowledge, rather, that both liberty and causality have a full metaphysical meaning. This approach has brought with it an element of "chance" in addition to the notion of "cause". One can think of the clinamen of Epicurus (341-270 B.C.), defined as a sort of random and unpredictable deviation of atoms, entities already postulated by Democritus (c. 460-370 B.C.), from their causally determined trajectories.

Several contemporary thinkers having a scientific background have proposed to found the possibility of free-will on the "uncertainty principle" of quantum mechanics. This type of approach turns out to be too simplistic. It is a kind of ontological transposition of a physical theory, since it reduces the metaphysical horizon to the horizon of "quantity" (even if understood in a very wide sense, as in the Aristotelian definition of quantity, which recalls the definition in modern topology) and of "relation," both being categories which constitute the basis of the quantitative sciences. In this perspective, the notion of causality turns out to be too restrictive, and univocally reduced to the simple "mechanical" (or at most "physical") interaction of the four fundamental forces known today. This radical metaphysical interpretation of a physical principle could be viewed as the flip side of determinism. In fact, such an interpretation would not be qualitatively different from determinism with its "univocal" conception of cause.

Several consequences follow from this way of framing the problem, consequences which are paradoxical from both the anthropological and theological point of view. The first consequence is that the "free" choice of will would turn out to be intrinsic to the behavior of the fundamental components of inanimate matter (quarks, elementary particles), which is identical to the matter in the human body. It would then follow that the electrons freely choose one among all the permitted states known to us on the basis of a probability theory, just as a man chooses freely from several possibilities placed before him. The second consequence is that «we would have to conclude that this inherent unpredictability also represents a limitation of the knowledge even an omniscient God could have» (A. Peacocke, God's Interaction with the World: The Implications of Deterministic Chaos and of Interconnected and Independent Complexity, in R. Russell et al., 1995, p. 279). This would be equivalent to stating that God could not know what traditional philosophy calls "future contingents". In the case in question, the future contingent is the evolution of the single particles which quantum mechanics cannot predict: "this limit on total predictability applies to God as well as to ourselves [...]. God, of course, knows maximally what is possible to know, namely the probabilities of the outcomes of these situations, the various possible trajectories of such systems" (ibidem, p. 281).

One may recall that as early as the Middle Ages, philosophers asked whether the divine intellect can know singulars, and in particular, singulars which are also future contingents. Different objections arose from various arguments. For example, some argued that singulars, knowable only in their material determination, were not suitable for a spiritual intellect; or, also, that it is possible that they do not exist and are therefore contingent; that they depend in turn on a free will other than the divine will; that their number is practically infinite, etc. Thomas Aquinas would confront this theme in a more direct way, showing that God knows singulars, infinite things, and future contingents (cf. Summa Contra Gentiles, I, cc. 65-69).

Such paradoxes can be removed at the root, as we shall see later, if one makes recourse to the "analogic" conception of causality, thereby overcoming the completely univocal conception in mechanics. It seems that the necessary physical basis of intellectual and voluntary activity is to be found not in the context of deterministic and indeterministic processes but rather in the complexity of organization of a highly evolved living being and its brain. In this sense, the Aristotelian-Thomistic vision and recent research on the subject of complexity and on the mind-body relationship seem to agree (cf. Basti, 1991).

In the vision of Aristotle --and later of Thomas Aquinas, who continued and developed Aristotle's philosophy-- the question of causality is approached using the classical theory of the four causes: material, formal, efficient, and final. In order that the theory of the four causes be understood in a non-equivocal way (recalling that modern language uses words differently from the way they were used in their original context), we have to keep in mind two other metaphysical theories it presupposes, namely the "hylemorphic" theory and the theory of "potency and act." We briefly review their essential features. a) The "material cause" is that which furnishes the constitutive basis of a material object, making it possible for it to be what it is, with certain properties and not others, that is, to receive a certain "form" (in the Aristotelian sense of the term); b) the "formal cause" is that which makes an object receive a "form," i.e., the "nature" which characterizes it with its properties, and not another; it is that which makes it this object, and not something else; c) the "efficient cause" causes a physical object which is now characterized by a certain form and/or accidental characteristics to assume another form or other accidental characteristics (quantitative, qualitative, positional, etc.) and is therefore responsible for change and local motion (which is a type of change), and so on; d) the "final cause" lies in the final state reached at the end of a change. In this perspective, the final cause is the most important, since all other causes in some way depend on it. The final state to be reached determines the material constitution of an object and its essential characteristics (form). Moreover, it requires an adequate efficient cause to produce a change from a certain initial state to the final state.

With a similar conception of causality, the cause-effect relationship cannot be reduced to a simple mechanical, electromagnetic, or physical (in the modern sense) interaction. In the strong (metaphysical) sense, cause is rather that which causes a thing "to be" and causes it to be in a certain way, and not simply that which "moves" it locally. Causality is therefore conceived in an "analogic" sense. God, as First Cause, can have among his effects the human being, i.e., a being endowed with "free" will who is not univocally determined (cf. Summa Theologiae, I, q. 83, a. 1, ad 3um; De Veritate, q. 24, a. 1, ad 3um). In other words, according to an analogic, and therefore non-mechanistic, conception, there is room for (and the necessity of) a cause whose effects can be free acts of the will of a rational subject, like the human being, acts which God can know singularly and to which he confers being, thus making possible any free human choice.

Now we also need to understand the way in which free will can act when it uses matter, which is governed by physical laws. This kind of investigation, which directly involves psychology, cognitive science, physiology, and biology, has yet to be developed. It seems today that an interesting angle from which to approach this subject is through the sciences of complexity, since they aim to overcome reductionism, to understand scientific reason in a new fashion, and to open it to the concept of analogy.

As John Polkinghorne has observed, "the causality which physics most readily describes is a bottom-up causality, generated by the energetic interaction of the constituent parts of a system. The experience of human agency seems totally different. It is the action of the whole person and so it would seem most appropriately to be described by a top-down causality, the influence of the whole bringing about coherent activity of the parts. May not similar forms of top-down causality be found elsewhere, including God's causal influence on the whole of creation?" (The Metaphysics of Divine Action, in R. Russell et al., 1995, p. 151).

II. Determinism and Indeterminism in the Sciences

In the context of the sciences, and especially in the physical and biological sciences, the question of determinism/indeterminism has historically been presented in different ways in the area of classical mechanics (and more generally in physics) and in that of quantum mechanics.

1. Mechanistic Determinism. It is known that "classical mechanics" --that is, Newtonian together with Einsteinian (relativistic) mechanics-- predicts, on the basis of its laws, the possibility of determining in an exact way in every instant of time, future, or past, the position and the velocity of a particle, conceived as a material point, as long as one knows the force law acting on the point and the initial conditions, that is, the position and velocity of the particle in a particular instant of time. In this way, we are saying that classical mechanics is "deterministic." In this regard, one recalls Laplace's claim that it is in principle possible to know the future of the universe, as well as its past, if one knows the forces, the positions, and the velocities of all the particles of the universe in a certain instant of time. Then there only remains the technical difficulty that, in practice, the positions and velocities of all the particles of the universe cannot be known. And even if one did know this information, one would not be able to perform the great number of calculations needed to model the evolution of the universe.

2. Statistical Indeterminism. This final difficulty is already encountered when one seeks to control the behavior of molecules moving in a container of liquid or gas. What one can do, in this case, is to approach the problem statistically, that is, one can study the "average behavior" of the particles of the system. "Statistical mechanics" is the tool with which one can give exact information about the "probability" that a particle is found in a certain region and with a velocity in a certain range. Thus, in the context of a deterministic classical mechanics, a certain "uncertainty" arises in the knowledge of the positions and velocities of the single particles, an uncertainty of statistical nature due to the practical impossibility of a complete investigation and calculation. In this case, one speaks of "statistical indeterminism." What we succeed in determining is only the probability that a particle is found in a certain region or has a velocity in a certain range. The uncertainty emerges on the macroscopic level, whereas on the microscopic level it is absent. In other words, there is a deterministic mechanics underlying the statistical uncertainty. It must be made clear that this type of uncertainty is not inherent in the laws of classical mechanics, which are deterministic, but stems from the intrinsic limits that define what the observer can know. One could speak of a "subjective" uncertainty rather than an "objective" uncertainty.
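A small sketch of the idea (the one-dimensional box dynamics and all numbers are our illustrative choices): each particle evolves by a perfectly deterministic rule, and probability enters only through ignorance of the initial microstate.

```python
import random

# Deterministic microdynamics: free flight with elastic reflection between
# walls at 0 and 1. The statistical description arises solely from random
# (unknown) initial conditions.
def position(x0: float, v0: float, t: float) -> float:
    x = (x0 + v0 * t) % 2.0            # unfold the reflections onto [0, 2)
    return x if x <= 1.0 else 2.0 - x  # fold back between the walls

particles = [(random.random(), random.uniform(-1, 1)) for _ in range(100_000)]
in_left_half = sum(position(x0, v0, t=37.0) < 0.5 for x0, v0 in particles)
print(in_left_half / len(particles))   # ~0.5: only averages are predictable
```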

3. Quantum Uncertainty. The Copenhagen interpretation of quantum mechanics states that uncertainty is not due to the practical impossibility of accessing all the information necessary to predict exactly the motion of the particles, but that uncertainty is a "law of nature": it is a "theoretical" impossibility to be found precisely at the microscopic level of the system. According to this interpretation, quantum mechanics is not statistical -- contrary to what Einstein and the supporters of the "hidden-variable" theories hold. Uncertainty does not stem from ignorance, but from a theoretical impossibility. In this case, we have a kind of "indeterministic mechanics" at the basis of an uncertainty which is of a non-statistical nature.

As Schrödinger observed in one of his essays in 1932, there was an attempt in the beginning to overcome "practical" determinism, and only later did we come to admit that such determinism was only theoretical. The opinion previously held was: if we knew exactly the initial velocity and position of every molecule, and we had the time to keep track of all the collisions, it would be possible to predict exactly everything that happens. Only the practical impossibility a) of determining exactly the initial conditions of the molecules, and b) of keeping track of all single molecular events, has led to the introduction of "average laws," which were deemed satisfactory because they involved quantities which can really be observed with the senses, even though such laws do not have enough precision to allow us to make sufficiently certain predictions about individual events. Therefore, it was thought that phenomena were determined in a strictly causal way when the atoms and molecules are considered individually. This formed the foundation of statistical laws, which are the only ones accessible to experience. As pointed out by Schrödinger, the majority of physicists held that a strictly deterministic theoretical framework was indispensable for the description of the physical world. They were convinced that an indeterministic universe was not even "conceivable." They admitted that, at least in an elementary process such as the collision of two atoms, the "final result" is implicitly contained with complete certainty and full precision in the initial conditions. It used to be said in the past, and still is sometimes even today, that an exact natural science is not possible if based on premises other than these, and that without a strictly deterministic premise science would be completely inconsistent; our "image" of nature would degenerate into chaos and would therefore not correspond to a nature which actually "exists," since, when all is said and done, nature is not complete chaos. All of this is undoubtedly "false." It is without doubt permitted to modify the picture given by the kinetic theory of gases of what happens in nature: one may think that in the collision of two molecules the trajectory is not determined by the "known laws of collisions," but by an appropriate "roll of dice" (cf. Schrödinger, 1932).

It must however be made clear that quantum mechanics still has deterministic features: the wave function ψ evolves deterministically in time according to Schrödinger's equation. Its physical meaning, however, is indeterministic, since it contains nothing other than the probabilities of finding a system in a certain state, at least according to the Copenhagen interpretation.
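
Schematically, in the standard formalism:

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\quad \text{(deterministic evolution)},
\qquad
P(k) = \bigl|\langle k \mid \psi \rangle\bigr|^{2}
\quad \text{(Born rule: indeterministic outcomes)}.
```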

Max Born's observation that one should not equate "causality" with "determinism," as in the mechanistic viewpoint, is still worthy of consideration. According to Max Born, it is not causality properly speaking that has to be eliminated, but only its traditional interpretation which equates it with determinism. He emphasizes that "the statement, frequently made, that modern physics has given up causality is entirely unfounded. Modern physics, it is true, has given up or modified many traditional ideas; but it would cease to be a science if it had given up the search for the causes of phenomena" (M. Born, Natural Philosophy of Cause and Chance [New York: Dover, 1964], pp. 3-4).

4. Determinism and Indeterminism in Non-Linear Systems: Deterministic Chaos. A third situation in which uncertainty appears was already noted by Poincaré in 1890 in the field of "classical non-linear mechanics." However, it was set aside for a long while because of the "quantum mechanics boom," which supplanted classical mechanics, since the latter proved inadequate to explain the microscopic world. Only in the 1960s were these studies resumed, and they came to enjoy a wide diffusion in the scientific and, later, popular literature.

It has been noted that the majority of differential equations that describe mechanical systems, even relatively simple ones, are "non-linear" equations, that is, equations in which the sum of two solutions is not itself a solution. For this class of equations, in the majority of cases, the solutions are "unstable." This means that even a slight deviation of the initial conditions from the theoretically desired ones can lead, after a certain time, to an exponentially growing deviation from the theoretically predicted trajectory. Since we cannot know the initial conditions with infinite precision, we are unable to make reliable long-term predictions. In this situation, in the vicinity of so-called "strange attractors," a dynamical behavior now known as "deterministic chaos" appears (cf. Gleick, 1989; Devaney, 1990). Only a minority of physical systems are stable, and therefore not chaotic. In these systems the error in the initial conditions remains bounded; in the presence of dissipation, the error even disappears as time increases, and the actual trajectory tends asymptotically to the theoretical one.
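
A minimal sketch of this sensitivity, using the logistic map as a stand-in for the non-linear systems discussed (the map and its parameters are our illustrative choice):

```python
# The logistic map at r = 4: a deterministic rule with sensitive dependence
# on initial conditions. Two trajectories starting 1e-12 apart separate
# exponentially fast, destroying long-term predictability.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, abs(x - y))   # the gap grows from ~1e-12 toward order 1
```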

In the past we did not realize that solutions could be highly sensitive to the initial conditions because we had not yet discovered how to treat non-linear differential equations in a general fashion (cf. F.T. and I. Arecchi, 1990, pp. 23-24).

In this case, a mechanical theory governed by "deterministic" laws provides a "non-statistical" uncertainty (since it appears even in the presence of a single particle and does not require a great number of objects). Such uncertainty is a result of the high sensitivity to changes in the initial conditions. The uncertainty here is related to the intrinsic limits of a mathematical tool (the non-linear equations involved), and not to a physical law, as would be the case for a law such as Heisenberg's uncertainty principle. Mathematics, in this case, proves to be limited in making physical predictions because it is not possible to know, either experimentally or theoretically, the initial conditions of the motion of a physical system with infinite precision. We have here an example in which nature cannot be completely described with a mathematical approach (unpredictability).

This situation gives rise to various questions in modern science. The first of these concerns the adequacy of mathematical theories to describe the natural world. A simple mechanical system consisting of three bodies is already mathematically unpredictable, and biological systems, having complex self-organizing structures, are all the more so. All of this seems to lead to one of the two following conclusions: either a) one should "widen" the scope and the methods of mathematical theories so that they can be applied to these new aspects of nature; or b) one should adopt a scientific mind-set in which a mathematical description is not absolutely essential or exhaustive, while at the same time keeping a logical-demonstrative methodology. The first of the two alternatives has been adopted by various researchers in the field of mathematics and logic (cf. De Giorgi et al., 1995; Basti and Perrone, 1996) and developed along the lines of a modern formulation of the theory of analogy. The second alternative has been considered in biology, and to a certain extent, in chemistry.

III. Chance and Finality

What is chance? In scientific, philosophical, and common language, the term "chance" is used in opposition to the term "caused."

Following a known Thomistic classification, we call "random" ("by chance") any event which a) does not appear to have a controllable "direct cause" (per se) and, as such, is unpredictable; b) appears without any goal or finality (cf. In I Sent., d. 39, q. 2, a. 2, ad 2um). We will look briefly at these two characteristics and later add some observations concerning the metaphysical and theological aspects of the problem.

1. Absence of a Direct Cause. A truly random event consists of two independent and accidentally concomitant events in which each event is the effect of a proper direct cause, without there being a direct cause of the concomitance itself of the two events. To use a common everyday example, the fact that two friends, arriving by different streets and without having arranged an appointment beforehand, meet in a square is "random" (cf. Aristotle, Physics, II, 4, 196a). Certainly, there is a cause for the fact that each of the two left home and proceeded to the square at that moment, but there is no direct cause of their meeting each other. One could say, at least, that there is no cause on the "same level" as that of leaving home: there could have been, for example, a third person who called them up independently to go to the square so that the two friends would meet as a surprise... In this case, we are dealing with a "second-level" cause which makes use of the "first-level" causes (which are, in this case, the free decision of each of the two friends to go out into the square).

In the course of the history of science, two situations which systematically appear before the researcher have been experimentally observed: the first consists in the constant association of two phenomena in which if one occurs, the other always occurs (and not vice-versa) whereby one recognizes the first as the "cause" of the other; the second consists in the existence of two phenomena which seem to occur together without any clear direct cause linking them. The latter are considered "random." If in the sciences it appears that determinism is closely related to a causal description of observed phenomena, uncertainty in its various forms always introduces an uncontrollable element whose origin is considered by the scientist to be random. However, explaining randomness as a manifestation of the limits on what the observer can practically know is quite different from saying that, theoretically, the reason for randomness depends on the nature of things. Philosophically speaking, "randomness," in the strong sense of the word, is only that which could derive from a reason of the theoretical kind and not from our ignorance.

2. Absence of Finalism. The absence of a direct cause in a random event is related to the fact that the event has no purpose: what happens by chance is, by definition, "aimless."

The problem of the relationship between causality, chance, and finalism has played an important role in contemporary debates on scientific method, such as, for example, the discussion found in J. Monod's book Chance and Necessity (1970): yet finality, which is systematically excluded from the physical sciences, has emerged in many cases as an adequate explanatory principle. For example, in the field of cosmology, it appears in the debate over the Anthropic Principle. In biology, it appears in the concept of "teleonomy," which overturns the notion of "initial conditions," typical of physics, replacing it with that of "final conditions." Such final conditions must be realized by a system according to a predetermined "program," as happens in biology with the DNA genetic code (cf. Cini, 1994, p. 236).

The possibility of interchange between initial and final conditions has always been present, in principle, in classical mechanics as well, because the theory of differential equations does not specify the time at which such conditions are imposed. However, in a dissipative system with a stable attractor, the symmetry between initial and final conditions is broken in favor of the final conditions. In this case, whatever the initial conditions may be, as long as they start from somewhere inside the basin of attraction, the evolution of the system tends to stabilize asymptotically on the attractor, which turns out to be the final state of the system. The most familiar example is given by the pendulum moving in a medium with friction, which tends towards the position of stable equilibrium regardless of its initial position and velocity; another example is that of the driven LCR circuit, which after a certain time reaches a stable solution, once the energy associated with the "transient solution" has been dissipated.
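
A rough numerical sketch of the pendulum example (the integrator, step size, damping value, and initial states are our illustrative assumptions):

```python
import math

# Damped pendulum integrated with a plain Euler scheme: every trajectory
# starting inside the basin of attraction is drawn to the same final state,
# so the *final* condition characterizes the system better than the initial.
def settle(theta: float, omega: float, damping: float = 0.5,
           dt: float = 0.01, steps: int = 20_000):
    for _ in range(steps):
        alpha = -math.sin(theta) - damping * omega
        theta, omega = theta + omega * dt, omega + alpha * dt
    return round(theta, 6), round(omega, 6)

for initial in [(1.0, 0.0), (-2.0, 0.5), (0.5, -1.0)]:
    print(initial, "->", settle(*initial))  # all -> approximately (0.0, 0.0)
```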

3. God and Chance. In the metaphysical-theological perspective, problems related to determinism and indeterminism - as well as the question of free will alluded to earlier (see above, I.2) - lead to questions regarding the modes of divine action within the world. How may God or, in properly theological language, Divine Providence, act on the world, if we admit "random" events, that is, events without a finality, having no direct (per se) causes, and not simply events whose finalism we do not know?

If one does not admit the analogy of causality, that is, the existence of differentiated levels and modes according to which causality can and must operate and be understood, and if one supposes instead that the only possible mode of causality is the physical-mechanical mode, then one is led to attribute to "chance" the same character as an efficient cause from which random events spring. Since chance has no direct cause in the physical-mechanical sense, it would assume, in substance, the role of the "first cause," and everything would then spring from chance. This position can be found, for instance, in Jacques Monod, who states that "chance alone is at the source of every innovation, of all creation in the biosphere. Pure chance, absolutely free but blind, at the very root of the stupendous edifice of evolution" (Chance and Necessity [London: Collins, 1972], p. 110). But this is contradictory, because of the very definition of "chance." In fact, that definition assumes the existence of other (indirect) causes which precede chance and whose effects are accidentally concomitant. Therefore, chance cannot take the place of the first cause, since it requires the existence of previous causes in order that their effects may be accidentally concomitant; it follows that chance needs the existence of a first cause, instead of eliminating it.

An intermediate solution, which is certainly interesting, was proposed by Arthur Peacocke. Opposing Monod's thesis, he acknowledges the existence of causality, and therefore of a first cause, but requires that the first cause be "self-limiting" so as to leave room for chance. The causal action of God differs from purely physical action because it is "informative": an immaterial action which interacts with the world as a whole and lets the laws of complexity govern single events. God, acting as an "informer" of the "world as a whole," does not have anything to do with singular events, and therefore cannot know the "future contingents" determined by the complexity of physical and biological systems (cf. God's Interaction with the World: The Implications of Deterministic Chaos and of Interconnected and Independent Complexity, in Russell et al., 1995).

This approach, which draws its inspiration from complex systems and information theory, succeeds in introducing a certain diversification in the modes of causality ("physical" action and "informative" action), but it seems to have the same limitation of conceiving divine causality and chance as two competitors who must divide the field of action between them. Such a conception does not use the analogy of divine causality, which would allow chance to play a role without diminishing the role of the first cause. But even in this perspective, the contradiction is not resolved: according to its very definition, chance subsists only if there are causes which precede it, producing, randomly, concomitant effects. It is hard to understand how a single random event could exist independently of the "first cause."

The fact that certain concomitances happen by chance, that is, without a "direct" or "secondary" cause, to use a philosophical term, does not mean that they have no cause in the absolute sense, even when considered separately: it is necessary to keep in mind the hierarchy of levels of causality. Metaphysically speaking, all that exists is caused and maintained in its being by the first cause (God), who is also the final end of all things. And the "first cause" acts through a chain of "secondary causes," up to those causes which are nearest to the single object under observation and act directly upon it. Thus, even events which are random for lack of a direct cause have a cause on a higher level in the chain of causes. And this is where the divine action lies, which even through the happening of random events orients all things to their final end. Thomas Aquinas treated the problem of the relationship of chance to the existence of Divine Providence adequately. He showed that Providence does not exclude contingency and does not deprive things of fortune and chance. Providence extends even to singular contingents and can, in principle, act either directly or through secondary causes (cf. Summa Contra Gentiles, book III, chps. 72-77). An interesting way of summarizing the relationship between chance and God was proposed by D. Bartholomew, whose approach is close to that of St. Thomas. Speaking of the "God of chance," he maintains that chance is a deliberate, if not necessary, part of God's creation.

Chance is not something which escapes God's control, nor something which opposes Him or contains within itself its final explanation: "If chance cannot be explained, the life of individuals would be submerged in disorder. On the other hand, if one admits that there can be a universal Cause of the world, this Cause must be responsible for everything that exists, including chance. 'And thus it turns out that everything that happens, if referred to the First Divine Cause, is ordered and does not exist accidentally, even if some can be called per accidens in their relation to other causes' (Thomas Aquinas, In VI Metaph., lect. 3)" (Sanguineti, 1986, p. 239). We may observe, finally, that the same idea is found in the Catechism of the Catholic Church, expressed in a more theological language: "God is the sovereign master of his plan. But to carry it out he also makes use of his creatures' cooperation. This use is not a sign of weakness, but rather a token of almighty God's greatness and goodness. For God grants his creatures not only their existence, but also the dignity of acting on their own, of being causes and principles for each other, and thus of cooperating in the accomplishment of his plan" (CCC 306).
