Comments on Artificial Intelligence, part 1

2019/01/01

Necessary Redefinitions and Clarifications

No discussion of artificial intelligence makes sense without a re-evaluation of a few cherished concepts. What qualifies as life? Intelligence? Sentience? Our current pictures of these ideas are rooted in hundreds of thousands of pages of philosophy and science. However, they are incomplete, suffering from a common ailment: insufficient sample size. When the borders of these concepts are analyzed more closely, they become fuzzy.

In order to have a meaningful discussion about whether life can exist in silico, we have to dismiss concepts which exclude digital forms of life by definition alone. Fortunately, the literature exists to show that there is already a tradition of doing just that.

Life

A good definition of life can be found on Wikipedia. While there is no truly authoritative definition of “life” (a fact which itself means something), everybody agrees on a few of the fundamentals. Most definitions are biologically chauvinistic: they imply that the only things we can call “alive” are those things which are, like us, cellular creatures formed of organic tissues and “coded” by DNA.

Below that definition, the article lists alternative definitions of life. These alternative definitions are important to us, particularly because they

… [distinguish] life by the evolutionary process rather than its chemical composition.

In fact, I would suggest a much stronger definition: the “strong alife” (“artificial life”) position. In doing so, I am not alone. In a potentially apocryphal quote, the computer scientist (and towering genius) John von Neumann is said to have stated that

Life is a process which can be abstracted away from any particular medium.

One of the early artificial life researchers, a man named Tom Ray, wrote a paper titled “An approach to the synthesis of life” in 1991. The paper is an extremely good read, but it can be a bit opaque: it centers on his computer program Tierra, a highly complex cyber-ecosystem simulation which seeks to demonstrate the evolution of complexity among digital “organisms” (self-modifying scraps of self-replicating code competing for computer resources). Tierra is featured extensively in Steven Levy’s 1993 book Artificial Life, which I never returned to my local library in high school (whoops).

In the paper, Ray expresses a strong alife position in no uncertain terms.

Ideally, the science of biology should embrace all forms of life. However in practice, it has been restricted to the study of a single instance of life, life on earth. Life on earth is very diverse, but it is presumably all part of a single phylogeny. Because biology is based on a sample size of one, we can not know what features of life are peculiar to earth, and what features are general, characteristic of all life.

[…]

The intent of this work is to synthesize rather than simulate life. This approach starts with hand crafted organisms already capable of replication and open-ended evolution, and aims to generate increasing diversity and complexity in a parallel to the Cambrian explosion.

To state such a goal leads to semantic problems, because life must be defined in a way that does not restrict it to carbon based forms. It is unlikely that there could be general agreement on such a definition, or even on the proposition that life need not be carbon based. Therefore, I will simply state my conception of life in its most general sense. I would consider a system to be living if it is self-replicating, and capable of open-ended evolution. Synthetic life should self-replicate, and evolve structures or processes that were not designed-in or pre-conceived by the creator (Pattee, 1989).
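Ray’s criterion – self-replication plus open-ended evolution under resource competition – is concrete enough to sketch in code. Below is a minimal toy (in Python, not Tierra’s actual assembly-like instruction set) in which “organisms” are genomes that copy themselves with occasional mutation and compete for slots in a bounded “soup”. The genome encoding, fitness rule, and mutation rate are all illustrative assumptions of mine, and the toy shows only replication-with-variation and selection, not the open-ended evolution Tierra aims at.

import random

# Toy digital ecosystem, loosely inspired by Ray's criteria (illustrative only):
# an "organism" is just a genome (a list of digits) that replicates with
# occasional mutation, and organisms compete for a fixed number of slots
# in the "soup" (a crude stand-in for Tierra's finite memory and CPU time).

GENOME_LEN = 16
SOUP_SIZE = 200        # bounded resource: only this many organisms survive
MUTATION_RATE = 0.05   # per-gene chance of a copying error

def replicate(genome):
    """Copy a genome, introducing occasional random copying errors."""
    return [g if random.random() > MUTATION_RATE else random.randint(0, 9)
            for g in genome]

def fitness(genome):
    """Arbitrary stand-in: how well this organism competes for resources."""
    return sum(genome) + 1

def step(population):
    """One generation: fitter organisms win more replication opportunities."""
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=SOUP_SIZE)
    return [replicate(p) for p in parents]

population = [[random.randint(0, 9) for _ in range(GENOME_LEN)]
              for _ in range(SOUP_SIZE)]
for generation in range(50):
    population = step(population)

print("mean fitness after 50 generations:",
      sum(fitness(g) for g in population) / len(population))

The point of the sketch is not biological realism but that nothing in it refers to carbon: replication, variation, and selection are defined purely over information.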

Many arguments can be (and have been) made about the validity of the strong alife hypothesis, in either direction. I will adopt it here, believing that the organic chauvinism of the traditional definition is unsuitable for a world in which intelligent agents may arise in silicon. (The moral philosophy of artificial intelligence is outside the scope of this article. If you’re in for the ride, here’s a Wikipedia rabbit hole.) Later I will also adopt the “strong AI” position as a consequence of some ideas by futurist Ray Kurzweil (not to be confused with John Searle’s definition of “strong AI”; see “Chinese Room”).

Intelligence

The first sentence of the Wikipedia article on intelligence says this:

Intelligence has been defined in many ways, including: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, and problem solving. More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

A quite inclusive definition, since it’s already intended to include nonhuman intelligence. (Of course, as with “life”, no authoritative definition of “intelligence” exists. Every field that claims to study intelligence does so with its own definition, tools, and metrics. While the related concept of the intelligence quotient (IQ) has strong predictive validity, it doesn’t actually “measure” anything – as an entirely statistical model, it leaves as many questions unanswered as it answers. Here is a strong paper covering some of the questions about IQ and why it doesn’t exactly correlate with “intelligence”.)

If you’re interested in the myriad definitions of intelligence, there is a non-exhaustive (but reasonably representative) list here on the same page.

The Wikipedia definition of intelligence will work just fine for our purposes, although we must remember that the means by which an artificial lifeform achieved intelligence would necessarily be very different from our own: an artificial intelligence lacks the millions of years of embodied evolution that gave us the sophisticated machinery of the brain, only recently turned towards cognitive complexity and technological development.

A passage from Nick Bostrom’s excellent Superintelligence: Paths, Dangers, Strategies will shed light on true machine intelligence, as differentiated from the kinds of highly complex programs (“expert systems”) that dominate public discussion of AI:

But let us note at the outset that however many stops there are between here and human-level machine intelligence, the latter is not the final destination. The next stop, just a short distance farther along the tracks, is superhuman-level machine intelligence. The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by.

The mathematician I. J. Good, who had served as chief statistician in Alan Turing’s code-breaking team in World War II, might have been the first to enunciate the essential aspects of this scenario. In an oft-quoted passage from 1965, he wrote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

People who believe in the coming of the Singularity, a radical transformation of human life and society by the exponentially increasing intelligence of a “seed AI” (the “ultraintelligent machine” mentioned by Good), have pointed out repeatedly that it is naive to consider intelligence as ending at “Humanville Station”. If you were to plot intelligence on a line (an oversimplification, but a useful one), the gap between a recognized genius (Einstein, Newton, von Neumann, etc.) and your local supermarket cashier is a fraction of the gap between your average human and your average dog. It is comforting to believe that human intelligence is the final word on cognitive complexity, but there is no reason to believe this beyond absence of evidence – and plenty of reasons to believe that humans aren’t nearly as intelligent as we believe we are.

The way I see it, there are exactly two likely options:

In my estimation, there are a few other options, which are unlikely:

Consciousness and Free Will

Perhaps you believe you have found the silver bullet against artificial intelligence: the increasingly nebulous concept of “consciousness”. Consciousness, as defined by Wikipedia, no doubt exists:

Consciousness is the state or quality of awareness or of being aware of an external object or something within oneself. It has been defined variously in terms of sentience, awareness, qualia, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood or soul, the fact that there is something “that it is like” to “have” or “be” it, and the executive control system of the mind.

Most humans, regardless of the difficulty in defining the term, will agree that they are conscious, and that other humans are also conscious. Those who do not agree are simply pedants, arguing an outdated concept of consciousness and dragging in equally outdated concepts of free will.

I am a compatibilist, which I define simply as the view that you are not free to make “real” choices – your life, like every other physical object, is strapped to an already-existing 4-D spacetime shape which includes the future, determined entirely by physical phenomena. This kind of perspective has been likened to religious answers to questions such as “if God knows you’re going to do X, how can he punish you for it?”: the fact that your choice was pre-determined doesn’t mean you didn’t make it. To a compatibilist, the question of whether an action was taken freely is an ethical or legal question, not a metaphysical one. Your decision-making process, the output of the most complex machine in the known universe, may feel phenomenologically like a truly “free” choice, but the brain is no more an exception to the mechanical nature of the universe than a falling bowling ball.

Accordingly, I don’t include any conception of free will in my idea of consciousness. You may be a believer in true free will or a strict determinist (both under the umbrella “incompatibilist”). I think both views are naive. For the first, the idea that a human spirit, immune to the laws of causality, is piloting your body is an antiquated one – the mind-body duality problem is not a serious one, in my opinion (Mind/Brain Identity). For the second, the philosophical concept of free will underpins any serious legal system (mens rea, conspiracy, duress, etc) and any serious ethical system (moral responsibility for one’s actions). There need not be an underlying metaphysical reality to a practical definition of free will.

Daniel Dennett, philosopher of science, mind, and biology, has this to say in his 1984 book “Elbow Room”, in a chapter titled “Why Do We Want Free Will?”:

The distinction between responsible moral agents and beings with diminished or no responsibility is coherent, real, and important. It is coherent, even if in many instances it is hard to apply; it draws an empirically real line, in that we don’t all fall on one side; and, most important, the distinction matters: the use we make of it plays a crucial role in the quality and meaning of our lives. […] We want to hold ourselves and others responsible, but we recognize that our intuitions often support the judgement that a particular individual has “diminished responsibility” because of his or her infirmities, or because of particularly dire circumstances upon upbringing or at the time of action. We also find it plausible to judge that nonhuman animals, infants, and those who are severely handicapped mentally are not responsible at all. But since we are all more or less imperfect, will there be anyone left to be responsible after we have excused all those with good excuses? […] We must set up some efficiently determinable threshold for legal competence, never for a moment supposing that there couldn’t be intuitively persuasive “counterexamples” to whatever line we draw, but declaring in advance that such pleas will not be entertained. […] The effect of such an institution […] is to create […] a class of legally culpable agents whose subsequent liability to punishment maintains the credibility of the sanctions of the laws. The institution, if it is to maintain itself, must provide for the fine tuning of its arbitrary thresholds as new information (or misinformation) emerges that might undercut its credibility. One can speculate that there is an optimal setting of the competence threshold (for any particular combination of social circumstances, degree of public sophistication, and so on) that maximizes the bracing effect of the law. A higher than optimal threshold would encourage a sort of malingering on the part of the defendants, which, if recognized by the populace, would diminish their respect for the law and hence diminish its deterrent effect. And a lower than optimal threshold would yield a diminishing return of deterrence and lead to the punishment of individuals who, in the eyes of society, “really couldn’t help it.” The public perception of the fairness of the law is a critical factor in its effectiveness.

With the “free will” red herring out of the way, we may ask: how can a machine be conscious? Easily: the same way we are, or in some roughly analogous way. Dennett defines consciousness in humans in his book “Consciousness Explained” in this way:

In a Thumbnail Sketch here is [the Multiple Drafts theory of consciousness] so far:

There is no single, definitive “stream of consciousness,” because there is no central Headquarters, no Cartesian Theatre where “it all comes together” for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of “narrative” play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its “von Neumannesque” character) is not a “hard-wired” design feature, but rather the upshot of a succession of coalitions of these specialists.

The basic specialists are part of our animal heritage. They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face-recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their talents may more or less suit them. The result is not bedlam only because the trends that are imposed on all this activity are themselves part of the design. Some of this design is innate, and is shared with other animals. But it is augmented, and sometimes even overwhelmed in importance, by microhabits of thought that are developed in the individual, partly idiosyncratic results of self-exploration and partly the predesigned gifts of culture. Thousands of memes, mostly borne by language, but also by wordless “images” and other data structures, take up residence in an individual brain, shaping its tendencies and thereby turning it into a mind.

Dennett claims that a conscious mind is “merely” (word selected for its value in understatement) a collection of highly complex agents – modular pieces of brain matter which evolved in tandem to solve problems. There is some central executive function (or a distribution of central executive functions) that determines how to interpret the output of these many, many different modules competing for “our” attention.

This is remarkably similar to the “society of mind” idea of Marvin Minsky, the premier AI theorist and co-founder of the MIT AI Lab. Wikipedia explains it well:

A core tenet of Minsky’s philosophy is that “minds are what brains do”. The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

An artificial intelligence could (and, I think, must) be constructed along similar lines, as a collection of different modules (neural networks?) managed by a central executive function. This is a suitable starting point for a thought experiment about artificial intelligence.
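As a rough illustration of that starting point – and nothing more; the module names and the arbitration rule below are my own invented assumptions, not anything drawn from Minsky or Dennett – here is a sketch of a tiny “society” of specialist agents whose competing proposals are arbitrated by a central executive:

from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of a "society of mind"-style architecture:
# independent specialist modules each propose an action with some urgency,
# and a central executive simply picks the most urgent proposal.

@dataclass
class Proposal:
    action: str
    urgency: float   # how strongly this module wants attention
    source: str      # which specialist produced it

def threat_module(percepts: Dict[str, float]) -> Proposal:
    return Proposal("flee", percepts.get("threat", 0.0), "threat")

def hunger_module(percepts: Dict[str, float]) -> Proposal:
    return Proposal("forage", percepts.get("hunger", 0.0), "hunger")

def curiosity_module(percepts: Dict[str, float]) -> Proposal:
    return Proposal("explore", percepts.get("novelty", 0.0), "curiosity")

MODULES: List[Callable[[Dict[str, float]], Proposal]] = [
    threat_module, hunger_module, curiosity_module,
]

def executive(percepts: Dict[str, float]) -> Proposal:
    """The 'central executive': arbitrate between competing drafts."""
    proposals = [module(percepts) for module in MODULES]
    return max(proposals, key=lambda p: p.urgency)

# Example: a mild threat loses out to a strong hunger signal.
winner = executive({"threat": 0.2, "hunger": 0.7, "novelty": 0.4})
print(winner.action, "- proposed by the", winner.source, "module")

Each module here could of course be replaced by something as complicated as a trained neural network; the architectural point is only about how their outputs are arbitrated, not about what the modules are made of.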

An Aside: Intelligence and Consciousness

It is easy to believe that intelligence requires consciousness. Peter Watts, an award-winning writer of speculative fiction, argues against this through one of the central themes of his novel Blindsight. A collection of quotations from the novel is below. If you want context, you should just read the whole thing, available under a Creative Commons license here.

(I will be proceeding in the following parts with the assumption that conscious processes – self-awareness, self-modification, modification of goal-orientation, etc. – are necessary for an artificial intelligence. This is not because I believe Watts is incorrect, but because I cannot conceive of an unconscious intelligence in anything but a Lovecraftian manner.)

Imagine you have intellect but no insight, agendas but no awareness. Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing. You can’t imagine such a being, can you? The term being doesn’t even seem to apply, in some fundamental way you can’t quite put your finger on.

Evolution has no foresight. Complex machinery develops its own agendas. Brains—cheat. Feedback loops evolve to promote stable heartbeats and then stumble upon the temptation of rhythm and music. The rush evoked by fractal imagery, the algorithms used for habitat selection, metastasize into art. Thrills that once had to be earned in increments of fitness can now be had from pointless introspection. Aesthetics rise unbidden from a trillion dopamine receptors, and the system moves beyond modeling the organism. It begins to model the very process of modeling. It consumes ever-more computational resources, bogs itself down with endless recursion and irrelevant simulations. Like the parasitic DNA that accretes in every natural genome, it persists and proliferates and produces nothing but itself. Metaprocesses bloom like cancer, and awaken, and call themselves I.

The system weakens, slows. It takes so much longer now to perceive—to assess the input, mull it over, decide in the manner of cognitive beings. But when the flash flood crosses your path, when the lion leaps at you from the grasses, advanced self-awareness is an unaffordable indulgence. The brain stem does its best. It sees the danger, hijacks the body, reacts a hundred times faster than that fat old man sitting in the CEO’s office upstairs; but every generation it gets harder to work around this— this creaking neurological bureaucracy.

I wastes energy and processing power, self-obsesses to the point of psychosis. Scramblers have no need of it, scramblers are more parsimonious. With simpler biochemistries, with smaller brains—deprived of tools, of their ship, even of parts of their own metabolism—they think rings around you. They hide their language in plain sight, even when you know what they’re saying. They turn your own cognition against itself. They travel between the stars. This is what intelligence can do, unhampered by self-awareness.

I is not the working mind, you see. For Amanda Bates to say “I do not exist” would be nonsense; but when the processes beneath say the same thing, they are merely reporting that the parasites have died. They are only saying that they are free.

You invest so much in it, don’t you? It’s what elevates you above the beasts of the field, it’s what makes you special. Homo sapiens, you call yourself. Wise Man. Do you even know what it is, this consciousness you cite in your own exaltation? Do you even know what it’s for?

Maybe you think it gives you free will. Maybe you’ve forgotten that sleepwalkers converse, drive vehicles, commit crimes and clean up afterwards, unconscious the whole time. Maybe nobody’s told you that even waking souls are only slaves in denial.

Make a conscious choice. Decide to move your index finger. Too late! The electricity’s already halfway down your arm. Your body began to act a full half-second before your conscious self ‘chose’ to, for the self chose nothing; something else set your body in motion, sent an executive summary—almost an afterthought— to the homunculus behind your eyes. That little man, that arrogant subroutine that thinks of itself as the person, mistakes correlation for causality: it reads the summary and it sees the hand move, and it thinks that one drove the other.

But it’s not in charge. You’re not in charge. If free will even exists, it doesn’t share living space with the likes of you.

Insight, then. Wisdom. The quest for knowledge, the derivation of theorems, science and technology and all those exclusively human pursuits that must surely rest on a conscious foundation. Maybe that’s what sentience would be for— if scientific breakthroughs didn’t spring fully-formed from the subconscious mind, manifest themselves in dreams, as full-blown insights after a deep night’s sleep. It’s the most basic rule of the stymied researcher: stop thinking about the problem. Do something else. It will come to you if you just stop being conscious of it.

Every concert pianist knows that the surest way to ruin a performance is to be aware of what the fingers are doing. Every dancer and acrobat knows enough to let the mind go, let the body run itself. Every driver of any manual vehicle arrives at destinations with no recollection of the stops and turns and roads traveled in getting there. You are all sleepwalkers, whether climbing creative peaks or slogging through some mundane routine for the thousandth time. You are all sleepwalkers.

Don’t even try to talk about the learning curve. Don’t bother citing the months of deliberate practice that precede the unconscious performance, or the years of study and experiment leading up to the gift-wrapped Eureka moment. So what if your lessons are all learned consciously? Do you think that proves there’s no other way? Heuristic software’s been learning from experience for over a hundred years. Machines master chess, cars learn to drive themselves, statistical programs face problems and design the experiments to solve them and you think that the only path to learning leads through sentience? You’re Stone-age nomads, eking out some marginal existence on the veldt—denying even the possibility of agriculture, because hunting and gathering was good enough for your parents.

Do you want to know what consciousness is for? Do you want to know the only real purpose it serves? Training wheels. You can’t see both aspects of the Necker Cube at once, so it lets you focus on one and dismiss the other. That’s a pretty half-assed way to parse reality. You’re always better off looking at more than one side of anything. Go on, try. Defocus. It’s the next logical step.

Oh, but you can’t. There’s something in the way.

And it’s fighting back.

Next

Part 2 will explore the conditions under which we could consider a simulated agent to be “alive”, using the game Dwarf Fortress as a crude starting point.