C-Arts Magazine, January 2009.
Ingmar Bergman once wrote that in an hour-long film there are 27 minutes of complete darkness, of space between film frames. “When I show a film I am guilty of deceit,” he continued. “I am using an apparatus that is constructed to take advantage of a certain human weakness, an apparatus with which I can sway my audience in a highly emotional manner—to laugh, scream with fright, smile, believe in fairy stories, become indignant, be shocked, be charmed, be carried away, or perhaps yawn with boredom. Thus I am either an impostor, or, in the case where the audience is willing to be taken in, a conjurer.”
Every conjurer, every artist, is also an impostor. It is the essence of art. But if a lie is the act of equating things that are not equal, of saying two equals three, then lies are the key not just to art but to all life. Truth is death.
The necessity of lies is not some poetic statement—except to the extent that all language is poetic—but rather something built into life as deeply as the earliest bacteria comparing nutritional gradients against the patterns codified in their DNA memory. The higher functions of our own brains are located in the neocortex, a stack of six layers with massive quantities of neural wiring between them. When light hits our eyes, it triggers rods and cones that pass information on to V1, where neurons respond to it. Some neurons fire for a diagonal line, others for a horizontal one, for example, and a pattern made up of their combinations is sent to V2, which aggregates them into, say, a human nose, then sends that up to V3, which recognizes a face, a head sticking into your home office, and so on up to V6, which tells you that your wife is mad at you, probably because you left your socks on the floor.
What puzzled neuroscientists for decades is that more information travels from the higher regions back down to the senses, from the brain to the eyes, than from the senses up. So, ironically, they ignored it. And yet that feedback is the reason that if my brother breaks his nose or my mother puts on a new hat, I’ll still recognize them, whereas a computer would not.
As we look out at an ocean of constantly shifting patterns, our senses send information up the cortical hierarchy and the oversized probabilistic pattern-prediction machine in our skulls sends back a constant stream with the names of patterns that it recognizes and, based on those patterns, predicts what the senses will see next (even a painting has a “next” because we see by virtue of saccades as our eyes jump around the canvas), how to classify inputs based on probabilities, and so on. Most of what we “see” is actually generated by our internal memory model, with just enough reality coming through to trigger the appropriate pattern. All stage magicians use this fact, all conjurers, and it is why we don’t see Bergman’s 27 minutes of darkness any more than we see the blind spot every human eye has in its field of vision.
It’s five o’clock, you hear the front door, you remember what happened last time you left your socks on the ground, and your brain sends a prediction to your eyes and ears telling them to fit any incoming perceptions into the pattern of wife plus argument. Unless what actually comes through the door is drastically different from what you expected, you don’t really need to see the nose, the face, the person—because your socks told your eyes what to see.
Skipping over a loved one’s face and seeing only the coming argument can be harmful, but it’s a mechanism without which we couldn’t function. Perhaps you can’t make out a handwritten letter by itself, but have no problem understanding it if you see the whole word. Or can’t make out a word, but understand it in the context of a sentence. That’s the feedback coming back down from the higher cortical regions to the lower sensory ones—memory plus stereotyping fills our gaps in perception. A person who doesn’t stereotype in this way isn’t a Zen monk; he’s dead, a computer, a stone.
But what are these stereotypes—neuroscientists call them “invariant representations”—that our minds are constantly building? They are not Platonic ideals, not essences or things-in-themselves. They are cyclical metaphors based on our relationships with things. When we say, as Nietzsche did, that “the stone is hard” we don’t pretend that there’s a Platonic “hardness” to which we are referring. “Hard” is what the stone is, and the stone is hard—or was the last time it hit our fingers and so probably will be again in the future. Of course, lots of other things are hard as well, and so the pattern is defined by more than just the stone, but in the aggregate the invariant representation we have of hardness is an approximation that we’ve created in the course of our lives.
If you ask most people how many fingers the average man has, they will say ten. A computer will give the mathematically correct average of 9.998, reminding us that there are three types of lies—lies, damn lies, and statistics. There’s an ugliness to the computer’s fictional average man and an elegance to the human being’s ability to forget the tattooed pinky-less Yakuza we met in 1994. That distinction, the computer’s inability to stereotype, to forget, to believe in a lie, is the reason behind the utter failure of artificial intelligence.
A newborn doesn’t have the concept of fingers. She has diagonal and vertical lines wiggling in front of her face. As we grow, we learn to package all “fingers” into one abstract concept while ignoring that each is so unique the police use fingerprints to identify criminals. We learn to forget all the differences that are not useful to us—the fingerprints, how hairy the knuckles are—and focus on those that are useful.
In other words, it is only by forgetting huge chunks of data and focusing solely, teleologically and selfishly on what’s useful to us that we can fit together countless similar but different situations, equate the unequal, and in the process create a “truth.”
To say that the average man has ten fingers is a lie not only in a mathematical sense, and not just because there is not a single real finger among the ten, but because the very concept of a finger is created by equating the unequal. And yet we would never consider the statement a lie, simply because it is not socially harmful. Throughout nature the physically weak use simulation to defend themselves against those with horns and fangs and muscle, and of all the animals humans are supreme in deception, flattery, cheating and acting. Among humans—from politicians to soccer players—lying is a weapon, so for the sake of living together society tries to dull its sharpest edges. But society couldn’t function without lies, and so what it banishes, what it labels as “lie” in the first place, are those lies that are harmful. (Forgetting the man with no pinky from 1994 doesn’t make “ten fingers” a lie, but if I forget the woman from 1995 can I still claim “I’ve never cheated on you”?) People don’t mind being deceived so long as they are not damaged by the deception. Similarly, few want pure knowledge, let alone destructive truths. What we want is the useful, power-giving and life-preserving consequences of truth.
I was struck by how truth damages thinking during the trial of OJ Simpson. I was in law school at the time; my criminal law professor that semester was one of Simpson’s attorneys; and it seemed clear that Simpson was guilty and that the police had also planted extra evidence. But even among law students, most who believed the ultimate truth of Simpson’s guilt refused to believe the intermediate truth that the police had lied. And those who believed the police lied were certain Simpson was innocent. The trial was never a battle between truth and lie—it was a battle about which truth the jury would focus on.
“What, then, is truth?” Nietzsche asks, and answers. “A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.
“There is no honest man. We know less of honesty than we know of fingers or hardness. What we know is a quantity of individualized and thus unequal actions which we equate by omitting the unequal and calling them honest actions. In the end, we distil from them a qualitas occulta with the name ‘honesty.’”
This process of transferring nerve stimulus into images and on into concepts or invariant representations or stereotypes is inherently artistic. It is the essence of creativity—taking things that are different and fitting them together to form new patterns, whether they be hardness or honesty or the colour red as it shifts connotations from blood, to sacrifice, to the sacrifice of the proletariat, to communism, to China. Every symbol is a constantly shifting metaphor, and yet we are always trying to freeze them in place; when we succeed in petrifying the concept into a truth—sometimes powerful enough to convince people to die for it, to bleed, to refresh the symbol—that is when the artist dies. Even a truth like “freedom” turns into a prison for the mind, into “freedom fries.” And yet most of us can only live with any sense of security about the world by living among truths, by forgetting that we are ourselves artistically creating subjects who form the world by calling equal that which is not.
Art is different. It is the essence of art to use these truths, symbols and metaphors as play-toys that make up new signs and symbols, as the essence of painting is to create fictional space. And the difference of the artist, if he is honest, is that he remembers he is constantly lying; and being aware of his lies, he has both the ability and the temptation to use his lies for their original purpose: as weapons. Most successful artists do injure with their lies. Some use them as fangs and claws, but those few who are truly great injure not their colleagues but their audience, and through it, perhaps, society.
When the public first saw the deformed women’s faces in Picasso’s Les Demoiselles d’Avignon, they were scandalized. Picasso had broken the truth of beauty. Now, like the face, the sock, and the 27 dark minutes, we look at Les Demoiselles and see only Les Demoiselles or Picasso or cubism or, most dangerously of all, “art.” In order to keep injuring the viewer, art has to be fleet, but when art itself becomes a truth, when it becomes a preset pattern that our brains push down to structure our senses, when art becomes like my socks on the ground, when it becomes a coin that has lost its picture and is now just a coin, then it can never move fast enough. True disruption becomes impossible. Art becomes a simulation of itself, a simulation of a simulation striving for market price because the viewer is immune.
Perhaps this is simply a return to art’s roots—art as artifice. But it’s also a shame. Because a lie that injures the viewer creates space, an opening that cracks that stiffened mummy and lets a little light in. A lie like that is truly beautiful.