Cato White Papers and Miscellaneous Reports

Hayek's Evolutionary Epistemology, Artificial Intelligence, and the Question of Free Will

Gary T. Dempsey

Gary T. Dempsey is assistant director of development at the Cato Institute.

This article originally appeared in Evolution and Cognition, published by the Konrad Lorenz Institute in Vienna, Austria.

Abstract

This paper examines the evolutionary epistemology of the Austrian economist Friedrich Hayek. I argue that Hayek embraces a connectionist theory of mind that exhibits the trial-and-error strategy increasingly employed by many artificial intelligence researchers. I also maintain that Hayek recognizes that his epistemology undermines the idea of free will because it implies that the mind's operation is determined by the evolutionary interaction of the matter that comprises ourselves and the world around us. I point out, however, that Hayek responds to this implied determinism by explaining that it can have no practical impact on our day-to-day lives because, as he demonstrates, the complexity of the mind's evolution prevents us from ever knowing how we are determined to behave. Instead, we can only know our mind at the instant we experience it.

Key words: Connectionism, complex adaptive system, long-term potentiation, nonmonotonic reasoning, physiological memory, self-organizing maps, spontaneous order.

In the field of economics, Friedrich Hayek (1899-1992) has long been recognized for his contributions to the discipline, and in fact was awarded the Nobel Prize in 1974 for his pioneering work in the theory of money and economic fluctuations. But it should be noted that Hayek's original scholarly interest was in the natural sciences, not economics, and that it was in the area of theoretical psychology that he first raised the issue of self-ordering in complex systems. Indeed, in the winter of 1920, one year before going to work with his eventual mentor, economist Ludwig von Mises, Hayek wrote a manuscript that asked 'how can order create itself within our neural fibers?' 1 That manuscript was later supplemented and eventually published in 1952 under the title The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology.

Throughout The Sensory Order and later writings, Hayek makes it clear that the apparatus that allows us to know the world—the mind—is itself subject to evolution; that is, it is a 'work in progress' prone to modification by experience. The mind, he explains, is “incessantly changing” (1984, p243) and its contents constitute an adaptive “capacity to respond to [its] environment with a pattern of actions that helps [it] to persist” (1973, p18). This view puts Hayek squarely in the camp of the evolutionary epistemologists. Indeed, like Donald Campbell (1960, 1974), Karl Popper (1972, 1984, 1987), Konrad Lorenz (1977, 1982), and other expositors of the evolutionary model of human knowledge, Hayek maintains that knowledge is the product of trial-and-error learning and that our minds are characterized by gains in adaptive advantage due to the selective retention of useful representations of the physical world.

It is not surprising, then, that scholars have concluded that Hayek's epistemology has an essentially “evolutionary character” (Kukathas 1989, p49), or that Hayek takes the “evolutionist standpoint” (Gray 1986, p11) or the “evolutionary perspective” (Vanberg 1994, p96) in his epistemology. In the following essay, I attempt to piece together Hayek's epistemology and to explore the particulars of its evolutionary quality. I begin with a brief overview of Hayek's connectionist theory of mind, followed by an account of his evolutionary epistemology. This latter section consolidates Hayek's diverse thoughts on evolution to bring them into sharper focus for our discussion and shows how his view anticipates a number of conceptual developments in artificial intelligence. Finally, I explain how Hayek's epistemology undermines the idea of free will and how he responds to this claim.

The Connectionist Mind

At bottom, Hayek is a materialist. For him, there is no mind-body split. Instead, all our thoughts, memories, and ambitions result from the operation of matter. Indeed, for Hayek, “the assertion that...mental phenomena are 'nothing but' certain complexes of physical events [is] probably defensible” (1989, p88). Or more assertively, “mind is...the order prevailing in a particular part of the physical universe—that part of it which is ourselves” (1952, p178).

Hayek's materialism begins with the recognition that the locus of the mind—the human brain—is made up of a vast weave of fibrous cells called neurons, the cerebral cortex being the densest region, with more than ten thousand million of them. Each of these neurons, in turn, can be functionally connected to neighboring neurons via junctions called synapses; the potential number and complexity of connective patterns that can be built up between them is therefore practically unlimited. It is out of this universe of possibility, says Hayek, that the order we call 'the mind' emerges.

With respect to the formation of the mind, Hayek contends that the sensory experiences the brain processes are not unitary, but entail a collection of impulses. That is to say, like a suitcase filled with an assortment of shirts, pants, shoes, belts, socks, etc., sensory experiences are made up of many impulses corresponding to various aspects of the observed object or event. What is more, these impulses emanate not from one, but from many neighboring receptors in the sensory organ, and they occur in conjunction with still other impulses associated with participation in a specific kinesthetic activity, such as touching, looking, or listening. This package of impulses then courses through our nervous system and, through what Hayek calls “physiological memory” (ibid., p53), forges connective pathways or “links” (ibid., p104) between neurons. Such connections are formed, says Hayek, because the electrochemical impulses triggered by sensory stimuli change the “threshold of excitation” (1978c, p40) of affected neurons so that future impulses are 'positively weighted' or flow more easily through those already in “a state of preparedness to act” (ibid.). This view is not without some basis in modern neuroscience. Neuroscientists maintain that sensory experiences, especially recurrent or traumatic experiences, generate connections between neurons. What occurs is a physiological process called long-term potentiation, or LTP (Baudry and Davis, 1996). The LTP process involves changing the efficiency of synaptic transmissions along pathways that connect neurons—in other words, certain electrochemical signals travel more easily along LTP pathways. According to this theory, the connective pathways between neurons possess a class of postsynaptic amino acid receptors known as NMDAs. NMDA receptors are activated each time the pathway is confronted with an electrochemical impulse so that the receptivity of neurons with worn NMDAs is enhanced over time.
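
To make Hayek's talk of connections being 'positively weighted' with use a little more concrete, the sketch below shows a crude Hebbian-style strengthening rule in which repeatedly co-active neurons come to be linked by stronger connections. It is only an illustrative analogy, not Hayek's own formalism and not a model of actual LTP chemistry; the network size, learning rate, and saturating update rule are arbitrary assumptions.

```python
# A toy network of eight neurons whose pairwise connection strengths start at
# zero, i.e., no "physiological memory" has yet been laid down.
n_neurons = 8
weights = [[0.0] * n_neurons for _ in range(n_neurons)]

def present_stimulus(active, weights, rate=0.1):
    """Strengthen the connections between co-active neurons (a crude Hebbian
    rule), loosely analogous to lowering their 'threshold of excitation'."""
    for i in active:
        for j in active:
            if i != j:
                weights[i][j] += rate * (1.0 - weights[i][j])  # saturating growth
    return weights

# A recurrent sensory experience repeatedly activates the same group of neurons,
# so impulses come to "flow more easily" along those worn pathways.
for _ in range(20):
    weights = present_stimulus([1, 2, 3], weights)

print(round(weights[1][2], 3), weights[1][5])  # strengthened vs. untouched link
```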

As there is a multiplicity of impulses associated with each sensory experience we encounter, impulses from different sensory experiences may employ one or more of the same neural pathways. There will occur, in other words, an overlapping of physiological memory. This overlapping is perhaps the single most important concept of Hayek's theory of the mind for it leads to what he calls “simultaneous classification” (1952, pp180-181). Simultaneous classification is the idea that sensory experiences are at the same time related to all sorts of other sensory experiences via shared neural pathways. These shared pathways have the effect of grouping together or categorizing sensory experiences along the lines of a neural commonality. Returning to our suitcase metaphor for sensory experiences, it would be as if one containing shirts, pants, and shoes were linked to all the others with shirts, and at the same time linked to the ones with pants and, still further, linked to all the ones with shoes. The concept of simultaneous classification, in other words, means that at any given moment a sensory experience will be a member of more than one class of events, related through physiological memory to many other sensory experiences.
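
The suitcase metaphor can be given a minimal sketch: experiences that share an impulse are, by that fact alone, grouped into the same class, so each experience belongs to several classes at once. The bundles below are invented for illustration and are not meant to model real sensory experiences.

```python
from collections import defaultdict

# Each "suitcase" (sensory experience) is a bundle of impulses.
experiences = {
    "experience_A": {"shirts", "pants", "shoes"},
    "experience_B": {"shirts", "belts"},
    "experience_C": {"pants", "socks"},
    "experience_D": {"shoes", "socks"},
}

# A shared impulse acts like a shared neural pathway: it groups together every
# experience that employs it.
classes = defaultdict(set)
for name, impulses in experiences.items():
    for impulse in impulses:
        classes[impulse].add(name)

# "Simultaneous classification": experience_A is at once a member of the shirts,
# pants, and shoes classes, each of which also contains other experiences.
print({impulse: sorted(members) for impulse, members in classes.items()})
```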

Hayek also contends that there are connections that 'negatively weight' or inhibit the flow of impulses (1952, pp67-68). This not only defines a second way that the brain's electrochemical impulses are channeled, but compounds the complexity of neural patterns by introducing the possibility that different impulses can create connections that oppose or counteract each other.

Broadly speaking, this account of brain functioning conforms to what is called a connectionist model. This is a term that Barry Smith (1996) also applies to Hayek's theory of mind. What I (and Smith) mean to suggest by using this term is not only that neural connections form the focus of Hayek's model generally, but that there is an affinity between Hayek's views and those of the research field pioneered by Warren McCulloch and Walter Pitts (1943) that today encompasses artificial neural networks. As a historical matter, moreover, it should be recalled that one of the first connectionist computer models of the brain, Frank Rosenblatt's Perceptron, was directly inspired by the writings of Hayek and psychobiologist Donald O. Hebb (Rosenblatt 1958).
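
Since Rosenblatt's Perceptron is mentioned here, a minimal perceptron sketch may help show what 'connectionist' means in practice: knowledge is stored as adjustable connection weights rather than as explicit symbols. The tiny AND-gate task, learning rate, and epoch count below are illustrative assumptions, not a reconstruction of Rosenblatt's original machine.

```python
# A minimal perceptron: a weighted sum of inputs passed through a threshold,
# with the weights nudged after each error (a Rosenblatt-style learning rule).
def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - out
            w[0] += rate * err * x[0]
            w[1] += rate * err * x[1]
            b += rate * err
    return w, b

# Illustrative task: learn the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([(x, 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) for x, _ in data])
```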

Under Hayek's connectionist model of brain functioning, the “possibilities of classification of...different individual impulses and groups of impulses...are practically unlimited [and] adequate for building up an extremely complex system of relations among millions of impulses” (ibid., p71). As Hayek sees it, initial sensory impulses destined for the brain “pass in a great variety of directions...merely diffus[ing] and dissipat[ing] themselves in our neural fibers” (ibid., p120). An afferent impulse arriving for the first time, in other words, will “not yet occupy a definite position in the order of such impulses,” or have a “distinct functional significance” (ibid., p103). “But since every occurrence of a combination of such impulses will contribute to the gradual formation of a network of connexions of ever-increasing density, every neuron will gradually acquire a more and more clearly defined place in the comprehensive system of such connexions” (ibid., p103). It is out of this 'thickness' of connections that we are able to detect patterns and come to know the world. In fact, says Hayek, one of the things that distinguishes an adult mind from an infant mind is that an infant has a “much thinner net of ordering relations” (1978c, p44). Thus our experience is “richer than theirs as a consequence of our mind being equipped, not with relations which are more abstract, but with a greater number of abstract relations” (1989, p66).

This intimate relationship between neural connections and sensory experiences leads Hayek to what is called the correspondence theory of perception—the idea that the physical workings of the brain come to map things out in the world. As Hayek puts it, the “mental order involves...a gradual approximation to the order which in the external world exists between the stimuli evoking the impulses which 'represent' them in the central nervous system” (1952, p107). Hayek, however, is careful to point out that our representations are not in some manner originally attached to, or an original attribute of, the individual physiological impulses or stimuli. Rather, the process of physiological memory creates the distinctions in question. Indeed, the representations are “determined by the system of connections by which the impulses can be transmitted from neuron to neuron” (ibid., p53). This may also be expressed by saying that the specific character of a particular representation is “neither due to the attributes of the stimulus which caused it, nor to the attributes of the impulse, but [is] determined by the position in the structure of the nervous system of the fiber which carries the impulse” (ibid., p12). In other words, a given sensory impulse does not in and of itself designate specific mental representations. Rather, a mental representation is designated by the order of all the connections established in the mind.

Hayek's connectionism, therefore, leads him to assert that there is no basis to believe that the representation of physical reality that the mind makes possible is a complete representation of the world Ding an sich. Rather, each mind functions through a recognition of what is similar across its experiences at the expense of what is particular to a given item. “What we perceive of the external world,” explains Hayek,

are never all of the properties which a particular object can be said to possess objectively, not even only some of the properties which these objects do in fact possess physically, but always only certain 'aspects,' relations to other kinds of objects which we assign to all elements of the classes in which we place the perceived objects. This may often comprise relations which objectively do not at all belong to the particular object but which we merely ascribe to it as a member of the class in which we place it as a result of some accidental collection of circumstances from the past (ibid., p143).

In other words, Hayek's connectionist mind is not a strict catalogue of empirical data, but an extracted collection of similarities or analogies. As Anna Galeotti correctly summarizes his view, the mind does not know specific things, but kinds (1987, p170).2

But how is it that sensory impulses come to contribute to the gradual formation of an order of connections, especially one that is capable of distinguishing things in life's storm of sensory events? Or, as Hayek puts it, “the question which thus arises for us is how it is possible to construct from the known elements of the neural system a structure which would be capable of performing such discrimination in its responses to stimuli as we know our mind in fact to perform” (1952, p47). According to Hayek, the answer to this question has to do with the process of evolution. This is a predictable starting point for Hayek given that he asserts that “wherever we look, we discover evolutionary processes leading to...increasing complexity,” and moreover, “we understand now that all enduring structures above the level of the simplest atoms, and up to the brain and society, are the results of, and can be explained only in terms of, processes of selective evolution” (1989, p92).

The Evolutionary Mind

Once a 'thick' net of ordering connections is established in the mind, says Hayek, a range of possible neural routing patterns is engendered. Simultaneous classification, in other words, results in “a process of channeling, or switching, or 'gating' of the nervous impulses” (1967, p51). Yet Hayek is emphatic that this 'lock-and-dam' system of neural connections does not in and of itself specify the neural routing patterns that will be employed by the mind. Instead, neural connections constitute “dispositions” (1978c, p40) and only through competition among many different neural dispositions and combinations of dispositions will distinctly functional patterns be discovered. Hayek thus embraces the view that the physiological apparatus that enables us to know the world is itself subject to the pressures of the natural selection process. This view should not sound unusual to readers acquainted with the writings of neuroscientist William H. Calvin (1987, 1996b) and Nobel-laureate neurobiologist Gerald M. Edelman (1982, 1987), and it bears special noting that Edelman and Hayek were familiar with each other's work. In fact, Edelman cites Hayek in his book Neural Darwinism, 3 and Hayek cites Edelman in his book The Fatal Conceit.

In Hayek's evolutionary model, the brain “first develops new potentialities for actions and that only afterwards does experience select...those which are useful as adaptations to typical characteristics of its environment” (ibid., p42). In other words, the mind “simultaneously plays with a great many action patterns of which some are confirmed and retained as conducive to [its] preservation” (ibid., p43). The neural patterns produced in the structure of the nervous system “will first appear experimentally and then either be retained or abandoned” (ibid.). Since the “chance of persistence” of the mind is evidently increased if it possesses the capacity of “retaining a 'memory' of the connexions between events” that are capable of “correct[ly] anticipat[ing] future events” (1952, p129), there will emerge from natural selection among the brain's changing repertoire of neural dispositions and combinations of dispositions, patterns that conform to the requirements of survival.

This evolutionary model of the mind's operation is analogous to the one employed in Oliver Selfridge's artificial intelligence computer program, Pandemonium, or 'many demons.' That program contained numerous semi-independent sub-programs, or demons. When problems were encountered, all the demons would compete, and after a brief struggle, the winner would get to try to solve the problem. If it failed, others would try until a demon was found that allowed the overall program to continue to operate. Later Pandemonium-type programs involved random connections between demons so that they could build on each other and experiment with more sophisticated solutions. The longer the demons continued to function, the stronger the bond or connective confidence between them would grow. According to Daniel Dennett (1995), director of the Center for Cognitive Studies at Tufts University, this selection process may appear disorganized and inefficient with “all these different demons working on their own little projects...building things and then...tear[ing] them apart” (ibid., p183). But, “it's also a great way of getting something really good built—to have lots of building going on in a semicontrolled way and then have a competition to see which one makes it through to the finals” (ibid.).
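
A rough sketch of this 'many demons' strategy might look like the following, in which several candidate sub-programs compete on a problem and whichever one makes progress is kept for that step. The demons and the toy problem (reducing a number to zero) are invented for illustration and do not reproduce Selfridge's actual program.

```python
import random

# Each "demon" is a candidate strategy for reducing a number toward zero.
demons = {
    "subtract_one": lambda n: n - 1,
    "halve":        lambda n: n // 2,
    "do_nothing":   lambda n: n,
}

def solve(n, max_steps=100):
    """Pandemonium-style selection: let the demons compete; keep whichever demon
    actually makes progress on this step and discard the rest."""
    history = []
    for _ in range(max_steps):
        if n == 0:
            return history
        candidates = list(demons.items())
        random.shuffle(candidates)            # the brief competitive struggle
        for name, demon in candidates:
            result = demon(n)
            if 0 <= result < n:               # success test: progress was made
                history.append(name)
                n = result
                break
    return history

print(solve(37))  # the sequence of winning demons, found by trial and error
```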

Yet we should realize that such a process of natural selection is useful insofar as the relevant circumstances for survival are unknown. Indeed, it would be pointless to employ the natural selection process if it could be determined beforehand what a successful outcome would be. In this sense, natural selection is practical precisely because viable results cannot be precalculated. Instead, they are “discovered” (1984a, p255) through a trial-and-error procedure whereby unsuccessful solutions (or neural dispositions) are eliminated. What remains after the procedure, according to Hayek, is a form of “knowledge” (1984a, p257), a kind of residue of information on how to survive. This “knowledge,” however, is of a negative or Popperian (1963) sort; that is, learning to meet the requirements of survival through natural selection does not consist of “verifying” solutions, but of “falsifying” unfit alternatives.

It is also important to remember that the “knowledge” generated by this natural selection process cannot be called intentional. Although it may be highly conducive to survival, it does not itself have that aim. Rather, it is passively acquired through the ordeal of trial and error. It is on this point that Hayek's epistemology most clearly mirrors Samuel Pufendorf's algedonic notion of “implicit obligation” (Buckle 1991, pp63-64). According to Pufendorf, an early natural rights theorist, we can speak of two kinds of obligation: “explicit obligation,” where the need for an action is secured through its self-evident beneficence, and “implicit obligation,” where the need for an action is not apparent, but secured through the guiding pressures of harmful actions. With respect to Hayek's theory of the mind, this means that since it is constantly exposed to newly arriving sensory signals and its “persistence...will...be increased if it...happens to respond appropriately to harmful and beneficial influences” (1952, p129), its “actions will appear self-adaptive and purposive” (ibid., p122). The crucial point here, however, is that the “knowledge” that enables the mind to persist is “not built up by [itself], but that it is by a selection among mechanisms producing different patterns that the system...is built up” (1978c, pp42-43). In other words, says Hayek, the mind's evolution is “blind” (1988, p15). It depends not upon premeditated objectives or foresight, but upon a “process of exploration” (1984a, p263) or a “discovery procedure” (ibid., p255) that gropes through the space of what is possible and happens upon routines that fit the requirements of survival. Thus understood, the natural selection process is a nonteleological explanation of the mind's manifestly practical achievements.

The recognition that each person's mind is made up of a blind accumulation of useful responses to the demands of survival, moreover, is critical to Hayek's evolutionary epistemology; it allows him to dispense with three properties he has argued are commonly misattributed to the evolutionary process: optimalism, progressivism, and sequentialism.

Optimalism

Hayek points out that although blind selection makes possible the learning necessary for survival, it does not mean that permanent or absolute solutions will be discovered. It simply means that of the solutions available at a specific moment in time, the instrumental one(s) will survive, and at the same time, that any solution may diminish survivability in another context or future scenario. In other words, it is a fundamental mistake to view natural selection as a discovery process that necessarily supplies the optimal answers through time. Every evolutionary adaptation selects for some particular attribute over others, thus entailing a trade-off or opportunity cost: it closes off future lines of development and locks in traits that may be maladaptive in future environments. Accordingly, it would be wrong to conclude, starting from evolutionary premises, that whatever solution has evolved is “always or necessarily conducive to the survival or increase in the populations following them” (1988, p20).

Progressivism

Hayek maintains that we cannot describe evolution as a phenomenon forever progressing forward and upward. Indeed, 'progress' in evolutionary terms merely means adaptation to a changing environmental context and what that entails—success and failure, persistence and elimination. Solutions become more complex and better adjusted to generate survival, explains Hayek, not because they are approaching a superior end state, but because those that happened to change in ways that made them increasingly adaptive prospered. Indeed, “all evolution...is a process of continuous adaptation to unforeseeable events, to contingent circumstances which could not have been forecast” (ibid., p25). As in the case of biological evolution, natural selection “describes a kind of process (or mechanism) which is independent of the particular circumstances in which it has taken place on Earth, which is equally applicable to a course of events in very different circumstances, and which might result in the production of an entirely different set of organisms” (1967c, p32). Thus, “evolution [is] not linear, but result[s] from continual trial and error, constant 'experimentation' in arenas wherein different orders contend” (1988, p20).

Sequentialism

Hayek rejects the idea that evolution must follow a set sequence of phases. Indeed, “although...the original meaning of the term 'evolution' refers to such an 'unwinding' of potentialities already contained in the germ, the process by which the biological...theory of evolution accounts for the appearance of different complex structures does not imply such a succession of particular steps” (1973, p24). That is to say, the concept of evolution does not denote “necessary sequences of predetermined 'stages' or 'phases,' through which the development of an organism...must pass” (ibid.), and evolution does not know anything like “'laws of evolution' or 'inevitable laws of historical development' in the sense of laws governing necessary stages or phases through which the products of evolution must pass” (1988, p26). “One of the main sources of this misunderstanding,” says Hayek, “results from confusing two wholly different processes which biologists distinguish as ontogenetic and phylogenetic” (ibid.). Ontogenetic processes have to do with, among other things, the genetic development of an individual thing, something set by the inherent mechanisms built into it, such as the way DNA determines our physical make-up. By contrast, phylogenetic processes deal with the evolutionary movements of whole populations, such as the way the giraffe species acquired long necks as a result of an extended shortage in ground vegetation. Non-biologists tend to make the error of assuming that phylogenetic processes operate in the same closed way as ontogenetic processes often do. They do not. Phylogenetic processes are not preprogrammed, but dependent upon the changing requirements of survival. Indeed, phylogenetic processes are conditional; that is, they are reflective of the demands of the environment at that point in time.

The most significant implication of the blindness of natural selection, however, is that evolutionary success is based on chance, not foredesign. On this point, Hayek explicitly recognizes the practical value of error toleration. Indeed, as has been pointed out by complexity theorists Gregoire Nicolis and Ilya Prigogine (1989) in their discussion of the adaptability of ant colonies: “A permanent structure in an unpredictable environment may well compromise the ability of the colony and bring it to a suboptimal regime. A possible reaction toward such an environment is thus to maintain a high rate of exploration and the ability to rapidly develop temporary structures suitable for taking advantage of any favorable occasion that might arise. In other words, it would seem that randomness presents an adaptive value in the organization of the society” (ibid., p293). Similarly, for Hayek, random “mutations” (1973, p9) and “historical accidents” (1988, p20) are the raw material of the mind's evolution, and it is a capacity to generate and accumulate a superfecundity of variations that makes possible the adaptive learning and innovations that are necessary to accommodate the open-ended problem of survival.

It is on this point that Hayek's epistemology most clearly moves in the direction of Marvin Minsky's (1995) approach to machine learning. According to Minsky, co-founder of MIT's Artificial Intelligence Laboratory, the “wrongheadedness” (ibid., p163) of early artificial intelligence research was its emphasis on preprogramming the best strategies for dealing with particular situations. But what Minsky and others realized is that one does not understand something unless one understands it in “several different ways” (ibid.) at the same time. That is to say, “if you understand something in just one way, and the world changes a little bit and that way no longer works, you're stuck, you have nowhere to go. But if you have three or four ways of representing the thing, then it would be very hard to find an environmental change that would knock them all out” (ibid.). To rely, then, on only one kind of strategy is to invite cognitive paralysis. The 'trick' to productively interacting with the world, therefore, is to “accumulate different viewpoints” (ibid.) so that there are alternatives standing by when one fails.

This essentially is Hayek's view. He recognizes that “the immediate effects of...conflicting experiences will be to introduce inconsistent elements into the model of the external world; and such inconsistencies can be eliminated only if what formerly were treated as elements of the same class are treated as elements of different classes” (1952, p169). In other words, neural patterns based on past neural connections do “not always work,” and such events force the mind to adapt its approach to incorporate novel experiences (ibid., p168). According to Hayek, such adaptation can occur because the mind has “a large repertoire of...patterns...provid[ing] the master moulds (templates, schemata, or Schablonen) in terms of which will be perceived many other complex phenomena in addition to those from which the patterns are derived” (1967b, p51). In short, we are never of only one mind. We have numerous 'recruits' ready to perform. Accordingly, the mind assimilates inconsistent impulses when it eliminates ineffective 'recruits' and conforms to ones that are more instrumentally fit. The mind is thus “autoepistemic,” learning through the default logic of the natural selection process or what artificial intelligence researchers call “nonmonotonic reasoning” (Antoniou 1996).

Robert DeVries (1994) provides an instructive metaphor for this default process. Suppose that someone with little formal education, but plenty of curiosity, begins to contemplate the linear order of sentences. This hypothetical person develops a list of various rules to explain how sentences can be transformed into questions. He then notices the following regularities:

John has called his sister.

Peter can buy a bicycle.

People won't die.

and

Has John called his sister?

Can Peter buy a bicycle?

Won't people die?

What does our aspiring linguist do? He uses the following rule: statements can be transformed into questions by reversing the order of the first two words of a sentence. But then our would-be linguist encounters some new experiences that upset his rule.

The big house is cheap.

People without lungs will die.

and

Is the big house cheap?

Will people without lungs die?

If his rule were applied to the above questions they would have the following syntactic form:

Big the house is cheap?

Without people lungs will die?

As these questions are meaningless, our linguist's first rule—although previously effective—no longer accounts for his experiences; that is, the first and second words of a sentence are shown not to be the fundamental elements of the grammatical world. When this refutation occurs, our linguist is compelled to employ an alternate rule to adapt to his new experiences. He may then default to a second rule, one that states that sentences can be transformed into questions by reversing the order of the subject and the finite verb. This second rule has greater adaptive advantage; not only does it explain his new experiences, but it would have explained his old experiences had he employed it before. In short, the second rule is more instrumentally fit than the first.
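
The linguist's shift from the first rule to the second can be written out as a small sketch; the parsing below is deliberately naive (it only handles these sample sentences and treats a short list of auxiliaries as the 'finite verb'), and the helper names are mine rather than DeVries's.

```python
def rule_1(sentence):
    """First rule: swap the first two words of the statement."""
    words = sentence.rstrip(".").split()
    words[0], words[1] = words[1], words[0]
    return " ".join(words).capitalize() + "?"

def rule_2(sentence, auxiliaries=("has", "can", "won't", "is", "will")):
    """Second rule: move the finite (auxiliary) verb in front of the subject."""
    words = sentence.rstrip(".").split()
    aux = next(i for i, w in enumerate(words) if w.lower() in auxiliaries)
    reordered = [words[aux]] + words[:aux] + words[aux + 1:]
    return " ".join(w.lower() for w in reordered).capitalize() + "?"

old = ["John has called his sister.", "Peter can buy a bicycle.", "People won't die."]
new = ["The big house is cheap.", "People without lungs will die."]

# Rule 1 happens to work on the old experiences but is falsified by the new ones...
print([rule_1(s) for s in old + new])
# ...so the linguist defaults to rule 2, which accounts for both old and new cases.
print([rule_2(s) for s in old + new])
```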

Similarly, Hayek conceives of the mind's successful neural patterns and combinations of patterns as “rules” (1967a, p67). But unlike our metaphor, he does not use this term to mean explicit instructions. For Hayek, neural rules comprise instructions of an unarticulated sort. They consist, instead, of an implicit “capacity” (1984a, p257) to effectively survive in a given environment; a “knowing how” rather than a “knowing that,” to borrow from Gilbert Ryle (1945-6). The rules Hayek speaks of thus do not imply an awareness of purpose on the part of the mind but merely that it embodies “regularities of conduct” (1967a, p67) conducive to its maintenance—presumably because those that operate in certain ways have a better chance for survival than those that do not. In other words, minds are successful because they adapt to facts that are not known, and this success is brought about by the discovery of neural patterns that are not designed, but followed in action. Or, to put it differently, the mind does not consist primarily of insight into the relationship between resources and objectives, but of the blind selection of rules that enable it to endure. Indeed, says Hayek, trial-and-error learning “is a process not primarily of reasoning but of...the development of practices which have prevailed because they were successful...[and] the result of this development will...not be articulated knowledge but a knowledge which...cannot [be] stat[ed] in words but merely...honor[ed] in practice” (1973, p18). Our evolutionary mind, therefore, must be conceived of as having a meta-structure, of “being guided by rules of which we are not conscious but which in their joint influence enable us to exercise extremely complicated skills without having any idea of the particular sequence of movements involved” (1978c, p38).

This ignorance of the particular sequence of movements involved in the mind's evolution exemplifies what Hayek calls the “primacy of the abstract” (ibid., p35). The primacy that Hayek is concerned with is chronological. He contends that the concrete particulars we experience “are the product of abstractions which the mind must possess in order that it should be able to experience particular sensations, perceptions, or images” (ibid., pp36-37). Thus, by abstraction Hayek does not mean something complicated, but “presuppositions” (1989, p63). That is to say, the mind depends upon, or is secondary to, the unintended discovery of regularities of conduct that are obeyed in practice, but which are not deliberately devised. “The formation of abstractions,” he says, “ought to be regarded not as actions of the human mind but rather as something which happens to the mind” (1978c, p43); i.e. the formation of a rule seems “never to be the outcome of a conscious process, not something at which the mind can deliberately aim, but always a discovery of something which already guides its operation” (ibid.).

Hayek insists, however, that just because the evolutionary mind “does not so much make rules as consist of rules,” (1973, p18) it does not follow that its operation should be characterized as 'sub' conscious. Hayek puts this point clearly when he explains that it “is generally taken for granted that in some sense conscious experience constitutes the 'highest' level in the hierarchy of mental events, and that what is not conscious has remained 'subconscious' because it has not yet risen to that level” (1978c, p45). And, Hayek does not doubt that many mental processes through which stimuli evoke actions do not become conscious because they proceed literally on too low a level, “but this is no justification for assuming that all the [cognitive] events determining action to which no distinct conscious experience corresponds are in this sense subconscious” (ibid.). Instead, Hayek maintains that if his conception is correct, then processes of

which we are not even aware determine the sensory qualities which we consciously experience, [and] this would mean that of much that happens in our mind we are not aware, not because it proceeds at too low a level but because it proceeds at too high a level. It would seem more appropriate to call such processes not 'subconscious,' but 'superconscious,' because they govern the conscious processes without appearing in them. This would mean that what we consciously experience is only part, or the result, of processes of which we cannot be conscious (ibid.).

Thus, to paraphrase Carl Gustav Jung (Calvin 1996), just as we are not able to see the stars that are above during the day because the sun is too bright, unconscious cognitive activity is going on all the time—we simply cannot discern it through the medium of consciousness. If, on the other hand, we could know our mind's evolutionary processes, we could know the rules upon which our connectionist mind is based. But this, Hayek notes, is impossible; we cannot self-consciously know the evolutionary activity to which all our conscious thoughts necessarily refer. In order to describe such knowledge, we would need to know how it is conditioned and determined. But in order to describe this knowledge, we would need to possess additional knowledge on how it is conditioned and determined, and so on ad infinitum. Such a mind would soon find itself locked in a perpetual cycle of introspective analysis analogous to what computer scientists call an 'infinite loop' error. “The whole idea of the mind explaining itself is [thus] a logical contradiction” (1952, p192).4
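
Hayek's regress can be caricatured in a few lines of code: a procedure that can describe a piece of knowledge only by first describing the knowledge that conditions it never bottoms out, much like the 'infinite loop' mentioned above. The function below is my own analogy, not a formalization of Hayek's argument.

```python
def describe(knowledge):
    """To describe a piece of knowledge completely, we would first need to
    describe the knowledge that conditions and determines it, and so on."""
    conditioning = f"the rules that determine ({knowledge})"
    return describe(conditioning)  # the regress never terminates

try:
    describe("my current thought")
except RecursionError:
    print("The self-description never bottoms out; the mind cannot fully explain itself.")
```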

Instead, Hayek conceives of the mind as a complex adaptive system or “spontaneous order” (1978a, p250), a concept that forms the basis of his argument against the “constructivistic” (1978b, p3) fallacy that evolutionary phenomena, like “life, mind, and society” (1973, p41), must be centrally ordered. This fallacy is a variation of William Paley's “argument from design” (Popper 1987, p13), an argument that holds that if you find a complex structure, like a watch, you cannot doubt that it was designed by a watchmaker. So when you consider a complex structure, like the mind, you are bound to conclude that it must be designed by a directing consciousness.

Hayek vigorously disputes the argument from design. Throughout his writings, he makes the distinction “between an order which is brought about by the direction of a central organ...and the formation of an order determined by the regularity of actions toward each other of the elements of a structure” (1967a, p73). The former is a designed order, the latter a “spontaneous order.” Hayek further uses Michael Polanyi's notion of “monocentric” and “polycentric” (ibid.) orders to clarify this distinction: a monocentric order is organized by a directing core; a polycentric order, on the other hand, emerges out of “the relation and mutual adjustments to each other of the elements of which it consists” (ibid.). A polycentric order, in other words, is a “self-organizing” (1984a, p259) system; a system that “dispenses with the necessity of first communicating all the information on which its several elements act to a common centre” (1967a, p74) and operates, instead, through the trial-and-error interaction of many parts. In the case of the mind, this process gradually builds up a mental “geography” (1952, p109) or “sensory order” (1952, passim) suitable for effectively navigating the external world, or as artificial intelligence theorist Teuvo Kohonen (1995) might put it, the mind is a “self-organizing map” derived from the “unsupervised learning” of natural selection.
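
Kohonen's 'self-organizing map' analogy can be given a minimal sketch: a small one-dimensional map of units adjusts itself to unlabeled inputs without any central supervisor, so that the units come to reflect the structure of what they are exposed to. The map size, learning rate, and neighborhood rule below are arbitrary illustrative choices, not Kohonen's full algorithm.

```python
import random

random.seed(1)

# A tiny 1-D self-organizing map: five units, each with a single weight.
units = [random.random() for _ in range(5)]

def update(units, x, rate=0.3):
    """Unsupervised learning: the best-matching unit and its immediate
    neighbors move toward the input; no central organ directs the process."""
    best = min(range(len(units)), key=lambda i: abs(units[i] - x))
    for i in (best - 1, best, best + 1):
        if 0 <= i < len(units):
            units[i] += rate * (x - units[i])
    return units

# Unlabeled 'sensory' inputs drawn from two clusters around 0.2 and 0.8.
for _ in range(500):
    x = random.gauss(0.2, 0.05) if random.random() < 0.5 else random.gauss(0.8, 0.05)
    units = update(units, x)

print([round(u, 2) for u in units])  # a map that mirrors the structure of the inputs
```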

Yet this is not to say that the mind is merely the sum of its parts. Hayek rejects the idea of “one-directional laws of cause and effect” and supports the idea of “downward causation” (1989, p93). Downward causation is the argument that the mind operates like a feedback loop in which the whole is constrained by the micro-level activity, and the micro-level activity is constrained by the whole. Or, to put it another way, the local interactions of the parts give rise to a collective pattern or global dynamic and, in turn, this global dynamic sets the context in which the local interactions occur. It is important to remember, however, that this is not an argument against materialism; downward causation does not break the causal chain, but simply turns the chain inward on itself.

Before we turn our attention to how this account of the mind's operation undermines the idea of free will, two additional points bear noting. First, Hayek makes it clear that the discovery of neural rules is not going on in only one mind, but in everyone's mind, and that the discoveries made in one mind can “infect” (1967b, p47) other minds through speech and example. As such, he argues that humans are intelligent, in part, because neural rules can be accumulated and transmitted from person to person, generation to generation. “What we call the mind,” says Hayek, “is not something that the individual was born with...but something his genetic equipment helps him acquire, as he grows up...by absorbing the results of a tradition that is not genetically transmitted” (1988, p22).5 In other words, language, morals, law, etc., are not discovered ex nihilo by each mind, but simply constitute an epidemic of “imitation” (ibid., 24), of successful neural rules combining and spreading through populations. Under this view, “learning how to behave is more the source than the result of insight, reason, and understanding” (ibid., p21) and “it may well be asked whether an individual who did not have the opportunity to tap such a cultural tradition could be said even to have a mind” (ibid., p24).

Second, we must consider what Elliott Sober (1994) calls the “problem of the units of selection” (ibid., pxii). According to Sober, the question we should ask in cases of evolutionary phenomena is: “What kinds of objects should we regard as the relevant beneficiaries” of natural selection (ibid.)? For example, did opposable thumbs evolve because they helped guarantee the transmission of the genes of the carrier organism, or because they helped individual creatures to survive, or because they helped the whole species avoid extinction (ibid.)? With regard to Hayek's evolutionary epistemology, our attention has focused primarily on the trial-and-error discovery of useful neural rules. But what is the relevant unit of selection? i.e. What is it that is really being selected? Is it the particular neural pattern, or the individual whose mind makes the useful discovery, or the cultural group whose survival is improved by the spread of useful knowledge? Moreover, should our approach to this question be “variational” or “developmental” (Sober 1993)? A variational account refers to how, given a population of differentiated units, only those with certain qualities will survive. A developmental account refers to how individual units react to change and that only those adapting to the new situation will survive. The success of neural rules conforms to the variational account, while the success of the overall order of the mind conforms to the developmental account. At the same time, the success of a particular individual depends upon their cognitive attributes (a variational account), and the success of a culture depends upon its capacity to adapt to changing conditions (a developmental account). According to Hayek, these morphological issues are not easily unraveled because the mind—besides being subject to pressures of genetic and biological selection—is embedded in three other levels of selection: neural, individual, and cultural (Herrmann-Pillath 1994).

The Implication for Free Will and Hayek's Response

Hayek's view that the mind is a complex adaptive system or “spontaneous order” holds a significant implication for the age-old controversy about free will—defined as a will that is not the exclusive and necessary result of the interaction of physical material. As we have seen, the mind consists of matter and its relations, and since everything can be realized in these materialist terms, there is simply no room for freedom of will. Indeed, this is another way of saying that our choices, judgments, and decisions are determined by the operation of the material that constitutes ourselves and the world, or as Oxford scholar John Gray summarizes Hayek's view, “our ideas are merely the visible exfoliation of spontaneous forces” (1986, p30). But if this account is correct, why should we do anything purposeful at all? Doesn't Hayek's materialism destroy the idea of goal-directed action?

Not so fast, responds Hayek; we can never introspectively predict how our mind is to be determined. Instead, “we can know [our mind] only through directly experiencing it” (1952, p194). With regard to the issue of goal-directed action, then, Hayek makes it clear that his materialism makes no practical difference in our daily lives; we must still conduct ourselves as if we are free because we can never know how we are meant to behave. Indeed,

we may...well be able to establish that every single action of a human being is the necessary result of the inherited structure of his body (particularly of its nervous system) and of all the external influences which have acted upon it since birth. We might be able to go further and assert that if the most important of these factors were in a particular case very much the same as with most other individuals, a particular class of influences will have a certain kind of effect. But this would be an empirical generalization based on a ceteris paribus assumption which we could not verify in the particular instance. The chief fact would continue to be, in spite of our knowledge of the principle on which the human mind works, that we should not be able to state the full set of particular facts which brought it about that the individual did a particular thing at a particular time (1989, pp86-87).

Hayek thus salvages the idea of goal-directed action from the grips of materialism by maintaining that we cannot avoid acting as if we are free because we are never in a position to know how we are determined to behave. In other words, Hayek does not assert that our will is free, but that we are incapable of knowing how to behave like our will is unfree.

In order to gain a fuller understanding of this argument, we must begin with the recognition that Hayek is a materialist without being a reductionist. Or as he puts it, “those whom it pleases may express this by saying that in some ultimate sense mental phenomena are 'nothing but' physical processes; this, however, does not alter the fact that in discussing mental processes we will never be able to dispense with the use of mental terms [for] we shall never be able to explain [them] in terms of physical laws” (1952, p191). Our minds, he contends, “must remain irreducible entities” (ibid.).

A primary obstacle to reduction, says Hayek, stems from the mind's interconnectivity. This occurs because the mind's elements—sensory experiences—are linked to one another in such a way that they actually determine what the others are through their interconnections. The mind, in other words, is a quality of arrangements; “its actions are determined by the relation and mutual adjustment to each other of the [multiple] elements of which it consists” (1967a, p73) and the multitude of connections “proceeding at any one moment, can mutually influence each other” (1952, p112). Thus, adding or removing even one sensory experience will change all the others in some subtle way.

The practical implication of such interconnectivity is that a sensory experience cannot be analyzed without reference to the other sensory experiences that a mind has encountered; that is, in order to describe a sensory experience all the way through, one must describe its relations to other experiences, which are, in turn, related to still other experiences, and so on in an infinite regress. Logically, any attempt to describe precisely a sensory experience would have to take into account the complete order that emerges from a person's previous sensory experiences. As a result, the mind cannot be broken down into linear, A causes B terminology and reassembled into an explanation of the whole. No sensory experience is autonomous. Rather, all sensory experiences are embedded in complex relations with other sensory experiences. The relations change the experience so that it constitutes more than itself. It “resonates” with what Jacques Derrida (1976) might call “traces” of something “other.” Consequently, where one experience ends and another begins is undecidable. There are only sensory experiences in relations to other sensory experiences; their essence lies in their relations to the others and their effects on the same.

Another obstacle to reduction, says Hayek, has to do with the mind's dynamic quality. This occurs because the order of the mind is constantly being updated; that is, when the mind encounters a new bit of sensory data, it is itself altered by that data—it recontextualizes. The mind, in other words, successively publishes revised editions that incorporate the immediately preceding sensory experience. What results, to paraphrase Hayek's student Ludwig Lachmann (Garrison 1987), is a “kaleidic” process in which the order that we call the mind is continually cascading into new and novel patterns.

Given this, the mind is not a closed system. A closed system is like a finite collection of musical notes, where the possible patterns that can be played today are identical to the possible patterns that can be played next week, next year, or next century. But what happens when the unity of the system is broken and a new note is introduced? The whole nature of possible permutations changes. No possible permutation of the former set of notes can replicate a sequence containing the new note. The introduction of a new note, therefore, dramatically changes the possible outcome of all future scenarios.

Similarly, the introduction of a new sensory experience alters the mind's possible future scenarios. That is, each new sensory experience one witnesses will be interpreted within the context of an updated network of neural connections, one that incorporates the immediately preceding sensory information. As a consequence, each contemplation is unique or, as Heraclitus might have put it, you cannot step into the same stream of thought twice. The order of the connections in the mind, explains Hayek,

is modified by every new action exercised upon it by the external world, and since the stimuli acting on it do not operate by themselves but always in conjunction with the process called forth by the preexisting excitatory state, it is obvious that the response to a given combination of stimuli on two different occasions is not likely to be exactly the same. Because it is the whole history of the organism which will determine its action, new factors will contribute to this determination on the latter occasion which were not present in the first. We shall find not only that the same set of external stimuli will not always produce the same responses, but also that altogether new responses will occur (1952, p123).

What this suggests is that even if we could know the precise order and intensity of new experiences, this would not enable us to explain why a mind responds the way it does. The reason for this is the actual impossibility of ascertaining the particular circumstances which, in the course of a lifetime of experiences, have decided the emergence and trajectory of the complex order that we call the mind. In other words, the mind is biographical, and its manifestation is dependent upon a staggeringly long and statistically unrepeatable sequence of variables and intensities. Indeed, to paraphrase paleobiologist Stephen J. Gould (1989), wind back the tape of the mind to its early days; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like the identical mind will grace the replay (ibid., p14). Consequently, a more appropriate question to ask than Thomas Nagel's (1974) famous “what is it like to be a bat?” is “what is it like to be another person?” Since each mind is historically fingerprinted, this cannot be known. An identical sensory experience would require “an identical history”—a requirement that ultimately “precludes the possibility that at any moment the maps [or minds] of two individuals should be completely identical” (1952, p110). Thus, although people can refer to the same sensory experience, it neither follows that it has the same location or intensity in their evolutionary mind, nor that all the connections that extend from it are the same. Each experience is, in this sense, 'private'—just as there are no two identical snowflakes, there are no two identical sensory experiences of a snowflake.

In conclusion, it should not be difficult now to recognize that although Hayek rejects the idea of free will, he accepts the idea of a subjective will; that is, a willfulness unique to each individual. It should also not be difficult to recognize the predictive limitations applying to explanations of such a will. In fact, Hayek rejects the possibility of “specific prediction” in the case of the individual will and finds that such a goal is “completely unjustified” (1989, p88). He maintains, rather, that specific prediction of the will could “be achieved only if we were able to substitute for a description of events in...mental terms a description in physical terms which included an exhaustive enumeration of all the physical circumstances which constitute a necessary and sufficient condition of the...mental phenomena in question” (ibid.). But, as has been argued, viewing the mind as a “spontaneous order” creates an “impossibility of ascertaining all the particular data required to derive detailed conclusions” (ibid., p86). As a result, says Hayek, “the individual personality [will] remain for us as much a unique and unaccountable phenomenon...but whose specific actions we [can] generally not predict or control, because we [can] not obtain the information on all the particular facts which determined it” (ibid., pp86-87). In other words, even though we may know the general principle by which the complex adaptive system we call the mind is causally determined by evolutionary processes, this does not mean that a particular human action can ever be introspectively recognized as the necessary result of a particular set of facts. Indeed, Hayek maintains that we are in no better position to predict the specific future motions of our mind than we are “able to predict the shape and movement of [a] wave that will form on the [surface of the] ocean at a particular place and moment in time” (1984, p243). Returning to the topic of artificial intelligence, this raises an important closing observation. If the same quality of irreducibility applies to intelligent machines, then they too will face limits to introspection. But more significantly, it will also mean that we will be incapable of recursively describing their evolved will; that is, we won't be able to tell from their operation the precise sequence and relation of events that contributed to their specific manifestation. As a result, if Hayek's epistemological insights hold for artificially intelligent machines, we can already recognize an imminent limitation on our ability to predict and/or plan their behavior. We shall see. 6


REFERENCES

Antoniou, G. (1996) Nonmonotonic Reasoning. MIT Press: Cambridge.

Baudry, M. and Davis, J. (1996), Long-Term Potentiation, Vol. 3. MIT Press: Cambridge.

Birner, J. (1995) The Surprising Place of Cognitive Psychology in the Work of F. A. Hayek, unpublished paper. University of Maastricht, Department of Economics.

Buckle, S. (1991) Natural Law and the Theory of Property: Grotius to Hume. Clarendon Press: Oxford.

Calvin, W. H. (1987) The Brain as a Darwin Machine. Nature. 330:33-34.

———. (1996a) The Cerebral Code. MIT Press: Cambridge.

———. (1996b) How Brains Think. Basic Books: New York.

Campbell, D. (1960) Blind Variation and Selective Retention in Creative Thought As in Other Knowledge Processes. Psychological Review, 67:380-400.

———. (1974) Evolutionary Epistemology. In: Schilpp, P. (ed) The Philosophy of Karl Popper. Open Court: La Salle, Ill.

Dennett, D. (1995) Intuition Pumps. In Brockman, J. (ed) The Third Culture. Simon & Schuster: New York.

Derrida, J. (1976) On Grammatology. Johns Hopkins University Press: Baltimore.

DeVries, R. (1994) The Place of Hayek's Theory of Mind and Perception in the History of Philosophy and Psychology. In Birner, J. and Van Zijp, R. (eds) Hayek, Coordination, and Evolution: His Legacy in Philosophy, Politics, Economics, and the History of Ideas. Routledge: London.

Edelman, G. M. (1982) Through a Computer Darkly: Group Selection and Higher Brain Function. Bulletin of the Academy of Arts and Sciences.

———. (1987) Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books: New York.

Galeotti, A. (1987) Individualism, Social Rules, Tradition: The Case of Friedrich Hayek. Political Theory. 15:163-181.

Garrison, R. (1987) The Kaleidic World of Ludwig Lachmann. Critical Review, 1:77-90.

Gould, S. J. (1989) Wonderful Life. W. W. Norton: New York.

Gray, J. (1986) Hayek on Liberty, 2nd ed. Basil Blackwell: Oxford.

Hayek, F. (1952) The Sensory Order. The University of Chicago Press: Chicago.

———. (1967a) Notes on the Evolution of Systems of Rules of Conduct. Studies in Philosophy, Politics, and Economics. The University of Chicago Press: Chicago.

———. (1967b) Rules, Perception, and Intelligibility. Studies in Philosophy, Politics, and Economics. The University of Chicago Press: Chicago.

———. (1967c) The Theory of Complex Phenomena. Studies in Philosophy, Politics, and Economics. The University of Chicago Press: Chicago.

———. (1973) Law, Legislation, and Liberty: Rules and Order. The University of Chicago Press: Chicago.

———. (1978a) Dr. Bernard Mandeville. New Studies in Philosophy, Politics, Economics and the History of Ideas. The University of Chicago Press: Chicago.

———. (1978b) The Errors of Constructivism. New Studies in Philosophy, Politics, Economics and the History of Ideas. The University of Chicago Press: Chicago.

———. (1978c) The Primacy of the Abstract. New Studies in Philosophy, Politics, Economics and the History of Ideas. The University of Chicago Press: Chicago.

———. (1984a) Competition As a Discovery Procedure. In Nishiyama, C. and Leube, K. (eds) The Essence of Hayek. Stanford University Press: Stanford.

———. (1984b) Philosophical Consequences. In Nishiyama, C. and Leube, K. (eds) The Essence of Hayek. Stanford University Press: Stanford.

———. (1988) The Fatal Conceit. The University of Chicago Press: Chicago.

———. (1989) Order: With or Without Design? The Centre for Research into Communist Economies: London.

———. (1994) Hayek on Hayek. The University of Chicago Press: Chicago.

Herrmann-Pillath, C. (1992) The Brain, Its Sensory Order, and the Evolutionary Concept of Mind: On Hayek's Contribution to Evolutionary Epistemology. Journal of Social and Evolutionary Systems, 15:145-186.

———. (1994) Evolutionary Rationality, 'Homo Economicus,' and the Foundations of Social Order. Journal of Social and Evolutionary Systems, 17:41-69.

Jonker, A. (1991) F. A. Hayek: The Sensory Order. Cognitive Science, 3(2):103-127.

Kohonen, T. (1995) Self-Organizing Maps, Vol. 30. Springer: New York.

Kukathas, C. (1989) Hayek and Modern Liberalism. Clarendon Press: Oxford.

Lorenz, K. (1977) Behind the Mirror. Methuen: London.

———. (1982) Kant's Doctrine of the A Priori in the Light of Contemporary Biology. In Plotkin, H. (ed) Learning, Development, and Culture. John Wiley & Sons: New York.

McCulloch, W. and Pitts, W. (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics. 5:115-133.

Minsky, M. (1995) Smart Machines. In Brockman, J. (ed) The Third Culture. Simon & Schuster: New York.

Nagel, T. (1974) What Is It Like to Be a Bat? Philosophical Review. 83:435-451.

Nicolis, G. and Prigogine, I. (1989) Exploring Complexity. W. H. Freeman: New York.

Popper, K. (1963) Conjectures and Refutations: The Growth of Scientific Knowledge. Harper Torchbooks: New York.

———. (1972) Objective Knowledge: An Evolutionary Approach. Clarendon Press: Oxford.

———. (1984) Evolutionary Epistemology. In Pollard, J. (ed) Evolutionary Theory: Paths into the Future. John Wiley & Sons: London.

———. (1987) Natural Selection and the Emergence of Mind. In Radnitzky, G. and Bartley, W. (eds) Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Open Court: La Salle, Ill.

Rosenblatt, F. (1958) The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review. 65:386-408.

Ryle, G. (1945-46) Knowing How and Knowing That. Proceedings of the Aristotelian Society. 46:1-16.

Smith, B. (1996) The Connectionist Mind: A Study of Hayekian Psychology. In Frowen, S. F. (ed) Hayek the Economist and Social Philosopher: A Critical Retrospect. Macmillan: London.

Sober, E. (1993) The Nature of Selection: Evolutionary Theory in Philosophical Focus. The University of Chicago Press: Chicago.

———. (1994) Preface. In Sober, E. (ed) Conceptual Issues in Evolutionary Biology, 2nd ed. MIT Press: Cambridge.

Streit, M. E. (1993) Cognition, Competition, and Catallaxy: In Memory of Friedrich August von Hayek. Constitutional Political Economy. 4:223-262.

Vanberg, V. (1994) Rules and Choices in Economics. Routledge: London.

Weimer, W. (1982) Hayek's Approach to the Problems of Complex Phenomena: An Introduction to the Psychology of the Sensory Order. In Weimer, W. and Palermo, D. (eds) Cognition and the Symbolic Processes, Vol. 2. Lawrence Erlbaum: Hillsdale, N.J.

Footnotes

1 According to the introduction to Hayek's autobiographical dialogue Hayek on Hayek (1994): “When the university closed down in the winter of 1919-20 for lack of heating fuel, Hayek went to Zurich, where, in the laboratory of the brain anatomist [Constantin] von Monakow, he had his first encounter with the fibre bundles that make up the human brain....Yet the vision on the bundles of brain fibres which he had examined...stayed in his mind. He wrote a paper [entitled 'Contribution to the Theory of the Development of Consciousness'] wherein he tried to trace the progress of sensations (neural impulses) to the brain, where they assume the shape and sense of a perception. By the end of the paper he realized that [Ernst] Mach was wrong. Pure sensations cannot be perceived. Interconnections in the brain must be made; some sort of classification that can relate past experiences to present experiences must take place. Hayek began to grope his way toward a solution of a problem not previously recognized: How can order create itself? The solution sounded part Kant, part Darwin...It would eventually be pure Hayek” (pp3-5).

2 For further discussion of Hayek's theory of mind, see Weimer (1982), Jonker (1991), Herrmann-Pillath (1992), Streit (1993), and Birner (1995).

3 For example, Edelman (1987) writes: “Consider the two lines of the Wundt-Hering illusion...This rather banal exercise serves to demonstrate that there is only a rough correspondence between what has been called the sensory order and the physical order. Furthermore, it bears upon point...that the perceptual world is a world of adaptation rather than a world of complete veridicality” (p28). Elsewhere, Edelman (1982) writes: “[Hayek] made a quite fruitful suggestion, made contemporaneously by psychologist Donald Hebb, that whatever kind of encounter the sensory system has with the world, a corresponding event between a particular cell in the brain and some other cell carrying the information from the outside world must result in the reinforcement of the connection between those cells. These days, this is known as a Hebbian synapse, but von Hayek quite independently came upon the idea. I think the essence of his analysis still remains with us” (p25).
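The reinforcement rule Edelman describes in this passage can be stated very compactly. The sketch below is my own illustration, not something drawn from Edelman or Hayek; the function name hebbian_update, the variable names, and the learning-rate value are all assumed for the example. It expresses the familiar form of such a rule: a connection is strengthened only when the two cells it joins are active together.

    def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
        """Strengthen a connection only when both cells are active together
        (an assumed, minimal form of the reinforcement rule quoted above)."""
        return weight + learning_rate * pre_activity * post_activity

    w = 0.0
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)  # both cells fire: strengthened
    w = hebbian_update(w, pre_activity=1.0, post_activity=0.0)  # only one fires: unchanged
    print(w)  # 0.1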

4 Elsewhere, Hayek employs Kurt Gödel's incompleteness theorem to make the point that we cannot know our own mind because the 'set' that we call the mind cannot logically contain itself (1967b, p62).

5 This 'memetic' view not only provides a mechanism for minds to limit/correct catastrophic errors in society, but for society to limit/correct catastrophic errors in the mind. In other words, the system that we call the mind can act to 'regulate' the system of society and the system of society can act to 'regulate' the system that we call the mind. Moreover, when these systems are taken as one, this larger system can act in 'self-regulating' ways.

6 The author recently received his M.A. in political theory from the College of William and Mary and has been published in The Southern Journal of Philosophy. He wishes to thank Michael Giberson and an anonymous referee for their helpful comments and criticisms, to which the standard disclaimer applies. He also wishes to especially thank Manfred Wimmer for his assistance and indefatigable patience.

The reference for the published version of this paper is: Dempsey, G. T. (1996) Hayek's Evolutionary Epistemology, Artificial Intelligence, and the Question of Free Will, Evolution and Cognition, 2:139-150.