
Move 37: Artificial Intelligence, Randomness and Creativity: Part 2

by John Menick

 

Left - Piet Mondrian, Composition in line, second state, 1916-1917. © Collection Kröller-Müller Museum, Otterlo. Courtesy: Collection Kröller-Müller Museum, Otterlo. Right - A. Michael Noll, Computer Composition With Lines, 1964. Created with an IBM 7094 digital computer and a General Dynamics SC-4020 micro-film plotter. Photo: © A. Michael Noll

 

“One of the pictures is of a photograph of a painting by Piet Mondrian while the other is a photograph of a drawing made by an IBM 7094 digital computer. Which of the two do you think was done by a computer?”

 

“The unconscious is inexhaustible and uncontrollable. Its force surpasses us. It is as mysterious as the last particle of a brain cell. Even if we knew it, we could not reconstruct it.”
—Tristan Tzara

“You insist that there is something a machine cannot do. If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that.”
—John von Neumann

 

I

Fifty years ago, A. Michael Noll, a Bell Telephone Laboratories engineer, presented one hundred test subjects with two pictures and a questionnaire. The subjects, mostly Bell Labs colleagues, were divided up, as any self-respecting engineer might divide them, into “technical” and “non-technical” categories. The former category included physicists, chemists, and computer programmers. The latter was made up of everyone else: secretaries, clerks, typists, stenographers. The gender division was what might be expected for a corporate research center in 1966: mostly men in the technical group, mostly women in the non-technical. Though a portion of the respondents liked abstract art, all subjects were, in Noll’s words, “artistically naive.”

The twin pictures shown to the subjects were black-and-white photocopies of nearly identical paintings. The paintings were geometrically abstract, their style typical of that pioneered by the Neoplasticists almost fifty years prior. Both paintings were composed solely of short horizontal and vertical black lines, most of them arranged into T and L shapes. The lines were contained by an invisible circle; the circle itself was cut off at its four outermost sides, giving the composition a circular and compressed appearance. The distribution of lines in both paintings was similar, but not exactly the same. In both pictures, the T- and L-shaped lines clustered mostly at the left, bottom, and right sides of the containing circle, creating a crescent in negative space. Perhaps the paintings were a diptych; maybe one was a slightly inaccurate copy of the other. The accompanying questionnaire clarified: “One of the pictures is of a photograph of a painting by Piet Mondrian while the other is a photograph of a drawing made by an IBM 7094 digital computer. Which of the two do you think was done by a computer?”

The Mondrian painting was his 1917 Composition in Line. The IBM 7094 picture was titled Computer Composition with Lines. Few respondents were able to tell “which was done by a computer.” This was as true for respondents who claimed to like abstract art as it was for those who claimed not to like abstraction. Oddly, though, a strong preference for abstract art indicated a weaker ability to identify the computer-generated painting. Twenty-six percent of the abstract art enthusiasts correctly identified the computer painting, in contrast to thirty-five percent of those who disliked abstract art. Noll also asked the subjects which painting they preferred aesthetically. Sixty percent of the subjects preferred the computer painting, and again this correlated strongly with a preference for abstract art. Mondrian, it seems, did poorly even among his own potential enthusiasts.

Mondrian’s Composition in Line, one assumes, was made in the typical manner: hours of studio time, oil paint and brushes, concentration, false starts, fitful developments, final breakthrough. As far as we know, the painting’s horizontal and vertical lines found their place on the canvas due to Mondrian’s sense of pictorial equilibrium, not a chance operation outsourced to the I Ching or a roulette wheel. No one is certain, of course, what led Mondrian to place a particular line at a particular coordinate on the picture plane. We only have assumed intentions, assumptions that tend toward, as the critic tells us, the fallacious. Noll puts it best: “Mondrian followed some scheme or program in producing the painting although the exact algorithm is unknown.”

With Computer Composition with Lines, the exact algorithm is known. It has to be; computers are only ever deliberate. For the Mondrian program to work, every step of the algorithm must be spelled out, instruction by instruction, a picture first written.[1] How, though, could one build an algorithmic Mondrian? Noll understood Mondrian’s general techniques for painting Composition in Line, but the crucial details needed definition. Why was a line placed here, and not there? What determined the length of any given line? The answer, of course, was Mondrian. Without the painter, Noll turned to randomness.

 

II

For a moment, let’s indulge in a fiction. The fiction, which has several parts, concerns machines. For the first part, we have to believe that only humans make machines. This seems easy enough. After all, animals rarely use tools, and none make machines. The second part of the fiction is also easy to believe. It says that living beings are not machines. It says, as an unembarrassed neo-vitalist might, that organisms cannot be explained through mechanical principles. Although vitalism has long been discredited, most non-scientists still believe that an élan vital separates the living from the inanimate. For now, let’s indulge our vitalist biases: there is nothing machinic about life. Finally, we will need to narrow our definition of a machine to the very literal, and avoid all metaphoric uses of the term. Societies cannot be machines. Ecologies cannot be machines. Desire cannot be a machine. Machines are what we naively expect them to be: human-made mechanical systems requiring energy and exerting force. Steam engines, printers, clocks, film cameras, even pendulums and gears. If we are to believe this reduced, fictional definition of a machine, then we must admit, above all, that all machines are also deterministic. Machines will only do what they are designed to do, no more. Once a machine’s mechanisms are constructed, its functions are forever fixed. Without human intervention, a steam engine will never become an oil drill; an automobile will never become a camera. In more complex machines, these deterministic functions are decomposable into even more basic mechanics, mechanics that can be precisely described by mathematical physics. Every machine therefore comes with its own accounting, its own strict physical limitations, energy restrictions, productive capabilities. As the philosopher Georges Canguilhem wrote in “Machine and Organism” (1947):

“In the machine, the rules of a rational accounting are rigorously verified. The whole is strictly the sum of the parts. The effect is dependent on the order of causes. In addition, a machine displays a clear functional rigidity, a rigidity made increasingly pronounced by the practice of standardization.”[2]

If machines are strictly deterministic, standardized, if they are never more than the sum of their parts, then it is easy to deny machines creative agency. A deterministic system can only produce the same results; it can only be a medium for creativity. As an example, take the harmonograph—a simple machine that uses two or more pendulums to produce intricately curved line drawings. As the pendulums oscillate, their force moves mechanical arms that, in turn, draw pens across a fixed piece of paper. The resulting drawings are the product of gravitational physics, not a human hand or mind. The physics are, by definition, repeatable and mathematically describable. If there is any creativity to speak of in a harmonographic drawing, it is due to the harmonograph’s inventor or assembler rather than the mechanical arms moving across the paper. The harmonograph would not be due royalties on its work, and cannot sue other harmonographs for copyright infringement. Like a camera, the harmonograph has no agency of its own; it can only be considered a medium.

This holds true for computers as well. However, computers are also a special class of machines. Computers are not only subject to their own deterministic mechanics, but they can also emulate the mechanics of any other machine, as long as those mechanics are mathematically describable. With some work, our harmonograph could be codified into a program, and the program, equipped with a basic physics engine, would produce drawings identical to the physical harmonograph’s. Like the harmonograph, the program is deterministic. It has no agency. It cannot transcend its own logic. Likewise, if a computer can produce a picture in the style of Mondrian, it is only because Mondrian created the template for that emulation. The computer is incapable of arriving at Neoplasticism on its own.
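
For illustration, here is a minimal sketch of such an emulation, assuming the textbook model of a two-pendulum harmonograph in which each axis traces a damped sinusoid; the frequencies, phases, and damping constant below are arbitrary values, not measurements of any particular instrument. Run twice with the same parameters, it draws the same curve down to the last decimal place.

```python
import math

def harmonograph(steps=20000, dt=0.005,
                 fx=2.0, fy=3.01,         # pendulum frequencies (slight detuning curves the figure)
                 px=0.0, py=math.pi / 2,  # starting phases
                 damping=0.02):           # friction: the pendulums slowly wind down
    """Trace a two-pendulum harmonograph as a pair of damped sinusoids."""
    points = []
    for i in range(steps):
        t = i * dt
        decay = math.exp(-damping * t)
        x = math.sin(fx * t + px) * decay
        y = math.sin(fy * t + py) * decay
        points.append((x, y))
    return points
```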

 

A. Michael Noll, Ninety computer-generated sinusoids with linearly increasing period, 1965. The top line of this picture was mathematically expressed as a sinusoid curve. The computer was then instructed to repeat the line 90 times. The result closely approximates Bridget Riley’s painting Current, 1964. From A. Michael Noll, “Computers and the Visual Art,” in Design Quarterly (Minneapolis: Walker Art Center, 1966). Photo: © A. Michael Noll

 

It is interesting, then, that Noll chose randomness as the quality by which Computer Composition would both distinguish itself from Mondrian and outdo Mondrian at his own game. In Computer Composition, all line placements were selected randomly. The lines’ widths could be between seven and ten scan lines; their lengths could be anywhere between ten and sixty points. (The scan lines and points describe the vertical and horizontal axes of the microfilm plotter’s cathode ray tube.) Any line falling inside the parabolic region at the top of the composition was shrunk in proportion to its distance from the parabola’s edges. With some trial and error, Noll could make a Mondrian whose line distribution could conceivably have been determined by the painter.
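
A rough sketch of the procedure, for the curious: only the thickness range (seven to ten scan lines) and the length range (ten to sixty points) below come from Noll’s description; the number of bars, the canvas dimensions, the uniform placement, and the omission of the parabolic cropping at the top are assumptions made for brevity, not a reconstruction of his program.

```python
import random

def computer_composition(n_bars=140, width=1024, height=1024, seed=None):
    """Randomly placed horizontal and vertical bars, loosely after Noll."""
    rng = random.Random(seed)
    bars = []
    for _ in range(n_bars):
        x = rng.uniform(0, width)          # random placement on the canvas
        y = rng.uniform(0, height)
        thickness = rng.uniform(7, 10)     # in "scan lines"
        length = rng.uniform(10, 60)       # in "points"
        orientation = rng.choice(("horizontal", "vertical"))
        bars.append((x, y, thickness, length, orientation))
    return bars
```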

Noll believed that his study’s true subject wasn’t sociological taste—being educated enough to recognize a Mondrian or a fake—but how audiences responded to randomness. On the one hand, he wrote, enthusiasts of abstract painting not only tolerated compositional randomness, they sought it out. A more random Mondrian was a better Mondrian. On the other hand, for those who were not familiar with computers or abstract painting, a more orderly, less random picture was associated with a computer. Therefore, they guessed that the more orderly painting, the Mondrian, was made by a computer, and that the more random painting was made by a human. Both groups agreed on one thing: randomness and creativity were linked. Both groups saw randomness as the one quality a computer could not achieve; and although they may not have known why, both groups were right.

 

III

From the beginning, artificial intelligence (AI) has been concerned with transcending the deterministic limitations of machines. This applies to all forms of decision making, let alone creativity. Although rule-based approaches to AI dominated the field for decades, it became apparent that programs were only as good as their informational ontologies. Most importantly, these programs had limited ability to learn; they could only travel through predefined decision trees. Rule-based AIs might produce competency, but they would never produce creativity.

 

The RAND Corporation, A Million Random Digits with 100,000 Normal Deviates (Santa Monica: RAND Corporation, 1955)

 

The distinction between competency and creativity can be found in AI’s founding document, the proposal for the 1956 Dartmouth Summer Research Project. In it, computer scientists such as John McCarthy, Marvin Minsky, and Claude Shannon outlined most areas of AI research for the next half century, including neural networks and natural language processing. Less remarked upon was their proposal for research into computer creativity, “Randomness and Creativity”:

“A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.”[3]

Almost simultaneously with the Dartmouth conference, in a very different part of society, artists were looking to randomness to break with traditional notions of creativity. At first, the two groups seem complete opposites: naive computer scientists expecting roulette wheels to generate Michelangelos, and avant-gardists trading in aesthetic refinement for aleatory anarchy. But the two had more in common than they might have thought. Artists and composers like John Cage, Jackson Pollock, Robert Motherwell, Morton Feldman, and Iannis Xenakis used randomness to undermine conscious thought. Consciousness, as they saw it, had ossified creativity. Creativity had become, in a word, deterministic.

 

IV

One year after the Dartmouth AI conference, the artist, writer, and scientist George Brecht wrote “Chance-Imagery,” an overview of chance operations in Modernist art. (Although written in 1957, the essay was not published until 1966.) The essay drew equally on Brecht’s scientific knowledge as a research chemist and his developing interests as a conceptual artist. Unlike many of the Modernist manifestos it quotes, “Chance-Imagery” is a theoretically heterodox work. In it, Brecht mixes Freudian free association with D. T. Suzuki, Dadaism with thermodynamics, A Million Random Digits with Surrealist poetry.

Brecht divided the continent of chance into two territories. The first was automatism: streams of consciousness, snap judgments, nonrational associations. The second was mathematical and physical chance, from the roulette wheel to tables of random numbers. Both types of chance, for Brecht, were an escape from bias. As Marcel Raymond wrote, the unconscious does not lie. For André Breton, the unconscious was the factory of the marvelous. Jean Arp believed that chance granted him “spiritual insights.” The soaring rhetoric was common: Modernist artists may not have been the first to use chance operations, but they were among the most ferocious in elevating chance to almost religious heights. The “pioneer work,” according to Brecht, was Marcel Duchamp’s 3 Standard Stoppages. To make it, during 1913 and 1914, Duchamp—limiting his tools to wind, gravity, and aim—dropped one meter of thread from a height of one meter onto a blank canvas. He fixed the thread to the canvas, cut the canvas along the edge of the curved thread, and cut a piece of wood along one edge to match the curved thread. He then repeated the process two more times. Finally, the six pieces of glass and wood were fitted into a custom wooden box. The result, Duchamp said, was “canned chance.”

 

Marcel Duchamp, 3 stoppages étalon (3 Standard Stoppages), 1913-1914 (replica from 1964). Marcel Duchamp 1887-1968. © Succession Marcel Duchamp by SIAE, Rome, 2016. Photo: © Tate, London, 2016

 

Duchamp would not be chance’s only 20th-century canner. Chance would also be printed and bound, as scientific researchers required greater and greater quantities of random numbers. Although randomness was in high demand in industry and academia, it was increasingly difficult to come by. If one needed thousands of random numbers, traditional techniques for generating them—coin tosses, dice rolls, et cetera—would be tedious. Even worse, researchers knew how biased many physical processes could be: a fair coin toss that does not favor one side or the other is harder than it looks. To meet demand, research institutes produced books of random numbers, many of which are still in use today. Brecht, whose day job for many years was as a professional chemist, knew this industry well, and his essay is one of the few on art and chance to mention the small industry of random number production. The best-known book was the RAND Corporation’s 1955 A Million Random Digits with 100,000 Normal Deviates. The publication is a kind of classic of the genre, with several hundred pages of random numerals ready to be selected by statisticians, pollsters, computer scientists, and other professionals in need of aleatory input. Another book was the Interstate Commerce Commission’s 1949 Table of 105,000 Random Decimal Digits. To produce it, the ICC used numerical data selected from waybills, a process Brecht compared to the methods of the Surrealist exquisite corpse. The RAND Corporation book was generated using an electronic roulette wheel, a custom device created by Douglas Aircraft engineers that had little to do with an analog roulette wheel. RAND’s device used a noisy analog source—a random-frequency pulse generator—to generate the numbers. Both techniques had one thing in common: they did not rely on computational means to generate their random numbers, and that is because computers can never produce randomness. Contrary to the Dartmouth proposal, computers cannot “inject” randomness into thinking. They cannot produce a random question, if asked. In principle, they cannot fairly shuffle a virtual deck of cards. The reason for this has already been mentioned: a computer is only ever deterministic, and no deterministic operation can produce a random number. As the mathematician John von Neumann wrote:

“Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.”[4]

Sinful or not, computational randomness happens all the time. Online casinos shuffle virtual card decks millions of times a day. Every millisecond, cryptographic protocols use random numbers to encrypt computer-to-computer communications. How is this done? If computers are incapable of randomness, from what source does a computer generate randomness? One way is to avoid arithmetical methods altogether: if one needs a random number, one must find it outside the computer system. In some cases, casino software makes a call to a computer hooked up to a physical, truly random system such as atmospheric noise or radioactive decay. (Atmospheric noise is what Random.org, the modern successor to A Million Random Digits, uses.) However, although random physical systems guarantee true randomness, they can be slow and incapable of meeting high demand for random numbers.

A second, more scalable though imperfect method is to use a pseudorandom number generator (PRNG). A PRNG is a program that operates on von Neumann’s bad faith; it attempts to make a deterministic mathematical process produce an unexpected result. One of the first PRNGs, suggested by von Neumann himself, was the middle-square method. The middle-square method is almost useless for generating random numbers, but it does serve as an easy introduction to how a (poor) PRNG works. To use the middle-square method, first choose a four-digit number, square that number, take the middle four digits of the eight-digit result, and use those four digits as your random number. If you need another number, repeat the process starting with your new number. It is not hard to see how one might run into problems with this procedure, especially if the middle four digits all turn out to be zero. Even worse, within only a few generations it becomes apparent that the middle-square method produces more of some numbers than others. The middle-square method, to use Brecht’s expression, is hopelessly biased.
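
A few lines of code are enough to watch the method fail. The sketch below follows the recipe just described (square a four-digit seed, zero-pad the square to eight digits, keep the middle four) and nothing more; the seed and the number of iterations are arbitrary.

```python
def middle_square(seed, count=12):
    """Von Neumann's middle-square method, exactly as described above."""
    numbers = []
    x = seed
    for _ in range(count):
        squared = str(x * x).zfill(8)  # e.g. 1234 * 1234 -> "01522756"
        x = int(squared[2:6])          # keep the middle four digits -> 5227
        numbers.append(x)
    return numbers

# middle_square(1234) wanders for a while; many seeds soon fall into
# short cycles or collapse toward zero, and some digits turn up far
# more often than others.
```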

So much for machines. But what about humans? Can a human produce random numbers? In other words, if a person were asked to blurt out a long string of digits without thinking, perhaps operating under the spell of the unconscious, would every digit be as likely to occur as any other? According to Modernist artists, the answer should be “yes”—after all, chance operations, especially those of the Dadaist kind, were meant to free the artist from bias. As Brecht wrote of the Dadaists, if the unconscious is free from “parents, social custom and all the other artificial restrictions on intellectual freedom,” then it should be able to do exactly what a deterministic machine cannot do: generate randomness.[5]

Unfortunately, whatever the unconscious may be, it is not unbiased. It may be, in fact, more deterministic than conscious thought. Brecht, the professional chemist, knew that across scientific fields unconscious and reflexive behavior had been shown to be patterned, if not deterministic, offering no respite from social conditioning. At the beginning of the century, Sigmund Freud’s The Psychopathology of Everyday Life (1901) suggested that the unconscious was a poor random number generator (“I have known for some time that one cannot make a number occur to one at one’s own free choice any more than a name.”)[6] Soon thereafter, Ivan Pavlov showed that reflexes could be conditioned and deconditioned. By midcentury, as Brecht writes, statisticians had proved that human test subjects showed bias even when selecting wheat plants for measurement. For scientists, the unconscious is deeply deterministic, more an automaton than automatic.

Claude Shannon, the father of information theory, invented games and machines to illustrate this determinism. Shannon played one such game with his wife, Mary Elizabeth “Betty” Moore, also a mathematician. The game involved Shannon reading aloud to Moore one randomly selected letter from a detective novel. He then asked her to guess the next letter in the sentence. After he told her the right answer, she guessed the following letter, and in this way they worked through the book, letter by letter. Most probably she got the first letter of a word wrong, but as more of the word was revealed, the chances of guessing the next letter increased. The letter P might be followed by many different letters, but the string “probabl” will most likely be followed by an E or a Y. Letters tend to have statistically probable groupings, so if the letter was Q, Moore could be almost certain that the next letter was U. What Shannon was formalizing in his research was something we all know intuitively: English, like all languages, is statistically patterned, and we have internalized these patterns, though we might not be able to consciously apply numbers to the probabilities.
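
The intuition is easy to make literal. The toy below keeps only a one-letter memory, far cruder than the models Shannon actually worked with: it counts which character most often follows each character in a sample text, then replays the guessing game against that same text. The sample sentence and the function names are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters tend to follow it."""
    follows = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        follows[current][nxt] += 1
    return follows

def guess_next(follows, prev):
    """Guess the most frequent successor of prev; fall back to a space."""
    if prev in follows and follows[prev]:
        return follows[prev].most_common(1)[0][0]
    return " "

sample = "the letter q is almost always followed by the letter u in english"
model = train_bigrams(sample)
hits = sum(guess_next(model, a) == b for a, b in zip(sample, sample[1:]))
print(hits, "of", len(sample) - 1, "letters guessed correctly")
```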

A second game, the Mind-Reading Machine, required a custom-built mechanical device with a button and two lights. A player said the word “left” or “right” and pressed the button. The machine (built without sound inputs) then guessed which direction the player had said by switching on either the left or the right light. The player registered whether or not the machine was correct. If the machine matched the player, the machine won a point. If it was wrong, the human won a point. The machine was programmed with a simple algorithm to guess the player’s next choice, an algorithm inspired by the one hinted at in Edgar Allan Poe’s “The Purloined Letter” (1844) for playing “odds and evens.” As long as the human did not guess the algorithm, the machine stood a good chance of guessing the player’s supposedly random choices. The Mind-Reading Machine, though primitive, showed that the unconscious was semiregular; it was not a fount of the unexpected. Its workings could, perhaps, even be mimicked by a very simple machine.
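
Shannon’s device was a small state machine keyed to whether the player repeated or switched after winning or losing; the sketch below is not his circuit but a simpler stand-in that captures the same idea: remember what the player tends to do after each pair of recent choices, and bet on the most frequent continuation.

```python
from collections import defaultdict

class MindReader:
    """A toy stand-in for Shannon's Mind-Reading Machine (not his algorithm)."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"left": 0, "right": 0})
        self.history = []

    def predict(self):
        """Guess the player's next word before they say it."""
        if len(self.history) < 2:
            return "left"  # arbitrary opening guess
        tally = self.counts[tuple(self.history[-2:])]
        return "left" if tally["left"] >= tally["right"] else "right"

    def record(self, choice):
        """Tell the machine what the player actually said."""
        if len(self.history) >= 2:
            self.counts[tuple(self.history[-2:])][choice] += 1
        self.history.append(choice)
```

Against any player with a habit, and every player has one, its hit rate creeps above fifty percent.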

 

Claude Shannon’s Mind-Reading Machine, 1953, Codes & Clowns installation view at Heinz Nixdorf MuseumsForum, Paderborn, 2009. Object on loan from MIT Museum, Cambridge, MA. Photo: Jan Braun

 

V

Information is surprise, Claude Shannon wrote. Information is what we don’t expect, and information is what we don’t have. If we already have all information, there is no need for sending messages. If we already have the information in a message, the message is redundant. A redundant message may indicate more information—“If sent twice, the message is false”—or it may be an antidote to noise. Either way, if a message is redundant, it is expected. Redundancy can be reduced, compressed. Zero, zero, zero, zero… what comes next? With a random string, on the other hand, it is impossible to know what digit comes next. Randomness, then, is pure information. A random string is paradoxically full of information, more information than any English word. A random string, like information, is surprise. Chance, therefore, is also surprise. Pure chance can’t be reduced. It can’t be compressed. It can’t be anticipated. Chance is, as Jean Arp put it, the “deadly thunderbolt.”
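
The asymmetry is simple to demonstrate. Using zlib as a stand-in for compression in general (the particular library is incidental), a page of zeros collapses to a few dozen bytes, while the same number of bytes drawn from the operating system’s entropy pool refuses to shrink at all:

```python
import os
import zlib

redundant = b"0" * 10_000         # "Zero, zero, zero, zero..." -- fully expected
surprising = os.urandom(10_000)   # bytes from the OS entropy pool

print(len(zlib.compress(redundant)))   # a few dozen bytes
print(len(zlib.compress(surprising)))  # roughly 10,000 -- essentially incompressible
```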

When I read Arp’s phrase, I couldn’t help but think of Lee Sedol, the 9-dan Go master who lost a five-game match this past March to Google DeepMind’s Go-playing program, AlphaGo. During the second game, AlphaGo made a move so unexpected and original—Move 37—that Lee left the tournament room in shock. He went on to lose the game. The strategy that AlphaGo built around Move 37 was not taken out of a database of publicly known moves. Move 37 was new to the 5,500-year history of Go. It belonged to a style of play that Go commentators sometimes called “inhuman” and “alien.” In the months that followed, AlphaGo continued to develop this alien style of play, and, according to Demis Hassabis, CEO of DeepMind, Lee has begun learning from the machine.

 

Claude Shannon, Mind-Reading Machine’s operating diagram (Bell Laboratories Memorandum, March 18, 1953)

 

Let’s go back to our fiction about machines. It is untrue that machines cannot make other machines. From von Neumann replicators to self-assembling robots to the entire machine-tool industry, there exist, both on paper and in reality, machines that make other machines. It’s also untrue that machines cannot change functions. Computers are the best example of multipurpose machines. And as far as microbiology is concerned, there is no need to believe that life operates on any different physical principles than non-life. There remains, though, the question of determinism. Can a machine operate along non-deterministic lines? Asked another way, can a machine be more than the sum of its parts? Can it be creative? The answer is to be found not so much in randomness, in stochastic algorithms, as in learning. Learning—the ability to incorporate experience into decision making—is what separates such a system from the merely deterministic or stochastic. Life may operate under the same physical laws as non-life, but with a crucial difference: it learns. One does not need consciousness, either—all an organism needs for learning is genetics. At its simplest, genetics represents a record of past successes. On a more complex level, immune systems remember infections and brains record memories. When an AI learns to play Go, when it creates new styles of play as AlphaGo did, it operates on principles abstracted from neuronal reinforcement. AlphaGo learned how to play Go, invented new strategies of play, and is now teaching those styles to Lee Sedol. When a system can learn, that system is no longer deterministic. It is adaptive, it is complex, and it is creative.

 

Piet Mondrian, Composition in line, second state, 1916-1917. © Collection Kröller-Müller Museum, Otterlo. Courtesy: Collection Kröller-Müller Museum, Otterlo

 

[1] To produce Computer Composition with Lines, Noll used a “microfilm plotter,” a cathode ray tube synchronized with a camera. The subjects were presented with photocopy reproductions of the plotter drawing and painting.
[2] Georges Canguilhem, “Machine and Organism,” trans. Stefanos Geroulanos and Daniela Ginsburg, in Knowledge of Life, ed. Paola Marrati and Todd Meyers (New York: Fordham University Press, 2008), p. 88.
[3] John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (1955), http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
[4] George Dyson, Turing’s Cathedral (New York: Pantheon Books, 2012), p. 197.
[5] George Brecht, “Chance-Imagery,” p. 6, http://www.artype.de/Sammlung/Bibliothek/b/brecht/brecht_chance.pdf
[6] Sigmund Freud, The Psychopathology of Everyday Life (New York: W. W. Norton & Company, TKyear), p. 307.

 

 

John Menick is an artist, writer, and computer programmer. He lives in New York.

 

Originally published on Mousse 55 (October–November 2016)
