There’s a remarkable video in which a couple of white guys, Belgians, I think, are hanging out in the rainforest waiting to make contact with a group of Papua New Guinea tribesmen. This is sometime in the late seventies. The tribesmen were apparently one of the last untouched peoples of the world.
Can you imagine? For better or worse the Westerners had about ten thousand years on the hunter–gatherers: ten thousand years using markets to allocate resources, ten thousand years using the written word to accumulate good ideas—enough to make them aliens, basically, visitors from an incomprehensible future.
It’s worth watching the video if only to see the look in the tribesmen’s eyes the first time they see and touch the Europeans, their camera, matches, mirror, tape recorder, knife—each simple object overthrowing in its own way a lifetime’s worth of thinking about how stuff works and what might be possible. Watch as the tribesmen sing and play instruments and then a moment later listen to the whole thing played back to them. Imagine what they must be thinking, what must be happening inside a mind that’s never heard (or even conceived of) recorded sound.
There’s something oddly endearing about the whole thing. Partly because it’s nice to see others marvel at our gizmos, ones as plain as matches and mirrors. But mostly I think it’s because we delight in delight, if only because genuine astonishment and scenes of seeming magic, of the kind a muggle might find at Hogwarts, are so uncommon in our world.
It’s worth asking why that might be: what is it about “our world” that makes delight, particularly among adults, so unusual? Is it that we’re short on magic? Are we jaded?
I think the first thing that ought to be said is that, yes, the world we live in isn’t magical. That’s the price of our Enlightenment: we know now there are physical laws that can’t be broken, like the laws of thermodynamics, which say that energy cannot be created or destroyed, and rule out so-called “perpetual motion machines,” and, in general, guarantee that there ain’t no such thing as a free lunch: that the universe on the whole doesn’t want to do interesting things—that on the whole it wants to decay into states of higher and higher “entropy,” the disorder of random heat.
So “interesting things,” like Harry Potter’s Golden Snitch, can’t be willed into action with a magic wand, because in the real world there are costs associated with snitch-like intelligence, dynamism, and precision, costs that stem, ultimately, from the overcoming of disorder by energy. Indeed, the closest you’d probably come to a real-life snitch is a hummingbird, whose snitch-like behavior is driven not by magic, but by biomechanical machines (brains, muscles, lungs, wings, etc.) a thousand times more sophisticated than anything humans have built, machines that must be fed constantly with high-energy foods lest they run out of molecular gas and die.
That is no doubt the source of magic’s appeal: the most wonderful things are simultaneously easy to imagine and implacably difficult to make real—which is like the dictionary definition of “tantalizing”—and magic cuts straight to the good stuff. Trouble is, in so doing it elides all the bits that matter, the work that would bring it to life. That’s what makes it fantasy—and why it leads so often to disappointment.
It doesn’t help that Homo sapiens adapts with remarkable speed to novelty, so that even when our lawbound world does turn up something extraordinary, we get over it. Whizbang marvels like the iPad quickly degrade into tired units of electric furniture. Even that mirror, I’m sure, lost its luster among the tribesmen a few weeks down the road.
No wonder genuine astonishment is so hard to come by.
Luckily there’s another kind of wonder, one that won’t wear off so easily and that’ll never run out. A kind of wonder grounded in “hows,” not “whats.”
Let me explain what I mean.
Surely what was most exciting to the tribesmen as they encountered each of those modern objects was what the objects did. The match was exciting to them because it produced fire, the tape recorder because it captured sound, the knife because it cut with ease, and the mirror because it showed you you whenever you looked at it. Pretty phenomenal stuff. Far less exciting—at least in that moment—is the problem of how any of these things might work. Who cares? It would be like wondering how a UFO works as it hovers over your house.
I submit, though, that if wonder’s what you’re after, you’d do much better in the long run by attending more carefully to “hows” than “whats.” It’s the difference between a spark and a slow burn. If you can learn to get excited by finding out how, you can look forward to a lifetime of genuine astonishment, not just a handful of fantastic bursts. Of course it’ll be harder—it takes more work to understand the structure and mechanics of a thing than to merely use it—but that’s precisely why it’s better.
One might object that this is exactly what’s dangerous about modernity’s scientific urge, that our instinct to “reduce” a phenomenon into parts is responsible for our collective disenchantment. In response I’d lean on what the Nobel Prize-winning physicist Richard Feynman said in the first few minutes of a documentary appropriately called The Pleasure of Finding Things Out:
I have a friend who’s an artist, and he’s sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say, “Look how beautiful it is.” And I’ll agree. And he says, “You see, I as an artist can see how beautiful this is, but you as a scientist take this all apart, and it becomes a dull thing.”
And I think he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me, too. I believe—although I may not be quite as refined aesthetically as he is—that I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter—there’s also a beauty at smaller dimensions, in the inner structure…
The science knowledge only adds to the excitement, the mystery, and the awe of a flower. It only adds. I don’t understand how it subtracts.
Asking “how?” unearths riches when the question is directed at stuff that might otherwise be mere spectacle, like flowers, or hummingbirds, or those wimpy nighttime specks that turn out to be nuclear reactors and the source of life.
But perhaps the “how”–”what” divide is clearest not in nature, but in the instrumental objects of man: elevators, pencils, cell phones, refrigerators, and so on. These objects are created for their “whats.” They are meant to be used, not understood. Prying open a cell phone will even void its warranty—to interrogate its inner workings undermines the compact that brought it into being: “I will worry about how this works,” says its designer, “so that you don’t have to.”
As an object becomes more complicated the effect is only exacerbated. One is less and less able to ask “how?”, tempted, increasingly, to punt the question onto people paid to answer it.
Think, for example, of computers. As my college computer science professor put it, “Computers and computer networks are the most complex things human beings make (except for other human beings).” They’re built on a tower of abstractions whose depth and intricacy rivals that of the mammalian nervous system. The “hows” ought to be pouring out of us.
But they aren’t, really, and I think it’s because we are not so much concerned with how computers work as that they work. We want them to do stuff for us. Ours is a very “whatsy” relationship. Which is unfortunate, first because those “whats” no longer impress us all that much—we use these things daily; we have gotten used to them—and second, because the “hows” we care so little about are so much richer than we’d expect.
Take Microsoft Word. Word is like a typewriter with a handful of improvements: copying and pasting, fancy formatting, clean deleting. At the end of the day, it gives you a page of text.
It’s only natural to think that Word, which acts kind of like a typewriter, must work kind of like a typewriter. On a typewriter you strike a key attached by levers and springs (and sometimes an electric motor) to an arm, the end of which holds an inky letterform, which slams into a piece of paper and leaves a mark and turns a gear that notches the apparatus one character rightward. Perhaps something of that sort happens when you strike a computer keyboard?
If you thought so you wouldn’t be just the regular kind of wrong, but wrong like someone who thinks the world’s best paid billiards player is a sandwich. You’d be incredibly, incredibly wrong. And what’s worse is that you might stop thinking about it. You might conclude that a computer—at least the part of it that drives Microsoft Word—is about as exciting as a typewriter. You’d move on to other novelties.
In fact, the journey from keypress to computer screen is so wildly serpentine, and so unlike its mechanical counterpart, as to be nigh inscrutable. To get the details right would require at least one college course’s worth of study; to really understand most of it would require years and years of professional mastery. And still no single person would be able to articulate the entire chain. It’s just too big. (And this is coming from someone who writes software for a living.)
If I had to compress the story somewhat, I’d tell you that it starts with the electrical signal that’s initiated when some metallic bit at the bottom of the key you’re pressing touches another metallic bit below it, and ends in beams of colored light emitting from your screen. Along the way it winds through two worlds, one called “hardware” and the other “software,” both composed of layers of special-purpose machines: transistors, registers, ALUs, switches, inductors, multiplexers, NAND gates, flip-flops, microcode, ROM, the CPU, assemblers, parsers, lexers, C, C++, C#, the window system, functions, classes, libraries, the screen buffer, the raster scan, and many, many more.
Each of these machines is abstracted from the next in the sense that the machine above can use the machine below without having to know, or care, about how it works. All that matters is that the lower machine outputs what the higher machine expects as input. Machine six doesn’t need to know anything about machine two—all it worries about is what it’s getting from five and what it’s giving to seven. One can tinker with and muck up and improve the internal workings of each machine independently so long as one preserves their pairwise interfaces. This is the only way to build a system as complicated as a computer: as a series of independent modules each serving a single purpose.
If you didn’t do things this way, every part of the system would have to “know” about how every other part worked. It would be like if the checkout lady at a stationery store, in order to sell a pencil, had to understand the minute details of wood pulp production and graphite mining. It would be like if a general had to speak with every foot soldier to implement a battle plan.
The division of labor—the breaking down of a problem into parts; the principle of abstraction—is a miraculous boon. Think of it from the bottom up: imagine that you have a sophisticated algorithm, something that took a handful of PhDs years to develop. Now imagine if that algorithm and all the work that went into it, all the art of composing simple parts in just the right way toward something vastly more complex, were somehow wrapped up in a neat little package, “black boxed” to the point where its operation could be taken for granted, its behavior understood, so that now it, too, could be considered a “simple part,” and reused, and composed with others into something even vaster.
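The pairwise-interface idea can be sketched in a few lines of Python. This is a toy illustration, not how real hardware or software layers are written (the layer names here are made up): each “machine” calls only the one directly beneath it, so any layer’s internals can be rewritten without the others noticing.

```python
# Toy stack of "machines": each layer knows only the interface of the
# layer directly below it, never the internals.

def hardware(keycode):
    # Lowest layer: pretend to turn a keypress into a scan code.
    return f"scan({keycode})"

def operating_system(keycode):
    # Middle layer: consumes hardware's output without knowing how
    # it was produced.
    return f"event[{hardware(keycode)}]"

def word_processor(keycode):
    # Top layer: treats the OS as a black box; hardware is invisible here.
    return f"char<{operating_system(keycode)}>"

print(word_processor(81))  # → char<event[scan(81)]>
```

Replace the body of hardware with anything that returns the same shape of output and the layers above keep working untouched; that is the pairwise-interface guarantee in miniature.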
In a computer this sort of thing happens millions of times over. To take just one of an endless supply of examples, consider a function called Reduce, part of a software tool for mathematicians called Mathematica. Reduce solves arbitrary equations or inequalities, stuff like “x² + y² < 1” (for which values of x and y is the inequality true?). Reduce is incredibly powerful, a full-fledged algebraic solver, the sort of thing that 7th graders dream about (and graduate students too). It can do more than any full program I’ve ever written or might hope to write. But all one has to do to use it, and reuse it, and reuse it—either on its own or as one of many modules in a larger program—is refer to it by name. One needn’t understand the thousands of pages of code that compose its inner workings, nor the math that makes them possible. All of that can be taken for granted—“abstracted” out of the picture.
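Python’s standard library offers the same bargain. Here hashlib stands in for Reduce purely as an analogy (it is a hash function, not an algebraic solver): a pile of carefully engineered internals, reused by referring to it by name.

```python
import hashlib

# A full SHA-256 implementation -- carefully audited bit-twiddling we
# needn't read a line of -- "abstracted" behind a single name.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```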
In effect a programmer who calls on Reduce is like a general who says only, “Take ’em,” and watches the machinery of total war unfold.
Which is all a rather long way of pointing out that when you press a key on a computer you are not just turning white pixels black like a typewriter leaving ink on a page. Yours is a far grander endeavor. Indeed, according to that same professor of mine “the simple act of pressing a key results in somewhere between 5,000 and 50,000 instructions being executed at the lowest level.” You are setting in motion a rich economy of modular machines, a frantic electric dance.
The fact that it happens too quickly to notice and inside an opaque box is—like any real magic—at once a great achievement and a grave deception, all the more so in this case because it hides the world’s greatest metropolis of engineered activity. To use a computer without ever seeing any of that would be a bit like if Dr. Seuss’s Horton stumbled on that speck of dust in the Jungle of Nool and sneezed it away, never knowing that it was the home of Whoville, a tiny planet full of life.
A surprising “what” starts and ends with itself. If you watch a monkey ride an elephant, say, or listen to a supersonic jet fly overhead, you won’t be left much different than you were before the monkey mounted and the jet flew over.
What’s special about a “how” is the contribution it makes to your working stock of analogies, associations, metaphors, symbols, percepts, and words, the raw material of your thoughts and concepts. A deep dive of any kind, be it into the structure of a buffalo’s vascular system or the curve of binding energy, will change your mind, not in the lowly sense of tweaking your political tack but by literally reshaping the way you think. Changing synaptic connection strengths will rejigger the way associative chain reactions fire through your brain. New mental track will be laid down. Dormant motifs and themes will change what bubbles up to consciousness. You will get the urge to chase down whiffs of a new kind of question.
For example, in the course of writing the section you just read about computers, all sorts of images, phrases, excerpts, and ideas kept tempting my attention, as though whole networked halos of my mind suddenly lit up. Tracing the effect of a single computer keypress set my thoughts chain-reacting in a way not unlike the signal itself. My brain, temporarily immersed in a computer’s electromechanics, was primed to approach every question in that frame.
It was a fertile mood, a kind of nematic daydream, and, as if on purpose, it led me to a long-forgotten passage from a book I once read by Eric Baum, an AI researcher—one of those passages that might have never woken up had my mind not stirred it just so:
What metaphor then comes from, I suggest, is code reuse. When we understand a concept, it is because we have a module in our minds that corresponds to that concept.
The module knows how to do certain computations and to take certain actions, including passing the results of computations on to other modules. Metaphor is when such a module calls another module from another domain or simply reuses code from another module. Then we understand whatever the first module computes in terms of what the second module computes.
Consider, for example, the metaphor “time is money,” as reflected in English… we waste time, spend time, borrow time, save time, invest time in a project, and quit the project when some holdup costs us too much time. We budget time, run out of time, decide if something is worth our while…
What is happening here, I suggest, is that we have computational modules that are being reused in different contexts. For example, we understand “time is money” because we have a module for valuable resource management. This module, which is some computer code, allows us to reason about how to expend a variety of valuable resources: money, water, friendship, and so on. The circuitry in our brains implementing the code in our minds for understanding time either explicitly invokes circuitry for valuable resource management or contains circuitry (and hence code) that was, roughly speaking, copied from it.
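Baum’s point can be made literal in a toy sketch (the spend function and its uses are hypothetical, purely illustrative): one valuable-resource-management routine, written once, reused for money and for time alike.

```python
# A single "valuable resource management" module, reused across domains.

def spend(budget, amount):
    """Deduct an amount from any budget; the code is unit-agnostic."""
    if amount > budget:
        raise ValueError("not enough of the resource left")
    return budget - amount

dollars_left = spend(100, 30)  # spending money...
minutes_left = spend(60, 45)   # ...and "spending" time, via the same code
print(dollars_left, minutes_left)  # → 70 15
```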
I can see exactly why my mind dredged this up for me. I remember not quite understanding it the first time I read it—thinking that it might be a neat idea, but not quite having the context to chew it over. But here it was again, just as the related ideas of code reuse, abstraction, modularity, and complexity were becoming, as I tried to write about them, something of a minor preoccupation. That’s the beauty of “hows”: as they lead you to new information they simultaneously recast the old, as your brain reorients itself around the question.
Take this code reuse stuff. Baum’s odd proposition, that metaphors are like mental programs constantly reused, surfaced in the very moment I was most inclined to evaluate it, to imagine what such a thing might actually mean and draw out its consequences.
For instance, I now have the inclination to ask: if words are like snippets of code, then what happens when I speak to somebody? When I start chatting with a friend about hamburgers, say, am I somehow “executing software” “on” my interlocutor’s mind?
My thinking, now primed for the question, might go as follows: just as a programmer’s text instructions set in motion a million small cascades in software and hardware, all the way down to the shuttling about of electrons, so too does a spoken phrase kick off in another person’s brain a wave of electric signals, starting with those tiny hairs in their ear and bounding through their nerves and axons as a string of electric pulses triggered, here and there, by changing concentrations of ions across neuronal membranes.
Those pulses aren’t just pulses. We know that they must encode information of a kind that’s hard to quantify or understand, but critical to how my friend thinks and lives: the idea of a “hamburger,” the taste of it, the memories of every one he’s eaten. (Just as the state of millions of transistors encodes, in one way or another, all of the documents and actions I take on my computer.)
Thus when I say something aloud to a friend, the sounds I make manipulate layers of machinery just like the layers implicated in the journey from the pressed keyboard key “Q” to a mark in Microsoft Word. Here the trip is from the “hardware” of vibrating columns of air, ears, and temporal lobe, up to the “software” of words and concepts and feels. The main difference is that the layers in brain and body are many times more plentiful and complicated than the layers in a manmade computer.
What particularly tickles me about all this is what it implies about the very essay you’re reading. It implies that prose works like code. It implies that this very sentence, even, is instructing your brain and body to execute all sorts of programs: kinetic ones in your eyes and hands as they turn pages and scan blobs of ink; electric ones in your occipital lobe as those blobs are parsed into letters and words and sentences; abstract ones in your mind as those words become ideas. The machinery is yours, but these marks of mine are making it go.
I’m tempted to go on—I could go on and on—but then we might lose the point: that “whats” cauterize themselves, and lead to nowhere, and end in disappointment, whereas “hows” engage what you know and reveal what you don’t and unearth, with uncanny ease, the lodes of wonder due the student of the world.