A Better Question

November 29, 2011

If I had to compress the story somewhat, I’d tell you that it starts with the electrical signal that’s initiated when some metallic bit at the bottom of the key you’re pressing touches another metallic bit below it, and ends in beams of colored light emanating from your screen. Along the way it winds through two worlds, one called “hardware” and the other “software,” both composed of layers of special-purpose machines: transistors, registers, ALUs, switches, inductors, multiplexers, NAND gates, flip-flops, microcode, ROM, the CPU, assemblers, parsers, lexers, C, C++, C#, the window system, functions, classes, libraries, the screen buffer, the raster scan, and many, many more.

Each of these machines is abstracted from the next in the sense that the machine above can use the machine below without having to know, or care, about how it works. All that matters is that the lower machine outputs what the higher machine expects as input. Machine six doesn’t need to know anything about machine two—all it worries about is what it’s getting from five and what it’s giving to seven. One can tinker with and muck up and improve the internal workings of each machine independently so long as one preserves their pairwise interfaces. This is the only way to build a system as complicated as a computer: as a series of independent modules each serving a single purpose.
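
Here is a toy sketch of that discipline in Python (the machines and their numbering are my own invention, echoing the ones above, not any real computer’s stack): each layer calls only the interface of its neighbor and is blind to everything beneath.

    # A toy stack of "machines," each abstracted from the next.
    # The layers and their behavior are invented for illustration.

    def machine_two(signal: int) -> int:
        """Some low-level transformation; nobody above knows how."""
        return signal * 2

    def machine_five(signal: int) -> int:
        """Knows only machine_two's interface: an int in, an int out."""
        return machine_two(signal) + 1

    def machine_six(signal: int) -> int:
        """Worries only about what it gets from five and gives to seven;
        machine_two could be rewritten tomorrow and six wouldn't care."""
        return machine_five(signal) ** 2

    print(machine_six(3))  # 49 -- the layers compose without knowing each other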

If you didn’t do things this way, every part of the system would have to “know” about how every other part worked. It would be like if the checkout lady at a stationery store, in order to sell a pencil, had to understand the minute details of wood pulp production and graphite mining. It would be like if a general had to speak with every foot soldier to implement a battle plan.

The division of labor—the breaking down of a problem into parts; the principle of abstraction—is a miraculous boon. Think of it from the bottom up: imagine that you have a sophisticated algorithm, something that took a handful of PhDs years to develop. Now imagine if that algorithm and all the work that went into it, all the art of composing simple parts in just the right way toward something vastly more complex, were somehow wrapped up in a neat little package, “black boxed” to the point where its operation could be taken for granted, its behavior understood, so that now it, too, could be considered a “simple part,” and reused, and composed with others into something even vaster.

In a computer this sort of thing happens millions of times over. To take just one of an endless supply of examples, consider a function called Reduce, part of a software tool for mathematicians called Mathematica. Reduce solves arbitrary equations or inequalities, stuff like “x² + y² < 1” (for which values of x and y is the inequality true?). Reduce is incredibly powerful, a full-fledged algebraic solver, the sort of thing that 7th graders dream about (and graduate students too). It can do more than any full program I’ve ever written or might hope to write. But all one has to do to use it, and reuse it, and reuse it—either on its own or as one of many modules in a larger program—is refer to it by name. One needn’t understand the thousands of pages of code that compose its inner workings, nor the math that makes them possible. All of that can be taken for granted—“abstracted” out of the picture.
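
I can’t paste Mathematica here, but the same gesture exists in Python’s SymPy library (the parallel to Reduce is my own, and SymPy is a humbler solver): a single call by name stands in for algebraic machinery whose inner workings one never has to read.

    # SymPy as a stand-in for Mathematica's Reduce: a full algebra
    # system, invoked by name and otherwise taken for granted.
    from sympy import symbols, solve, reduce_inequalities

    x = symbols("x", real=True)

    print(solve(x**2 - 4, x))             # [-2, 2]
    print(reduce_inequalities(x**2 < 1))  # (-1 < x) & (x < 1)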

In effect a programmer who calls on Reduce is like a general who says only, “Take ’em,” and watches the machinery of total war unfold.

Which is all a rather long way of pointing out that when you press a key on a computer you are not just turning white pixels black like a typewriter leaving ink on a page. Yours is a far grander endeavor. Indeed, according to that same professor of mine, “the simple act of pressing a key results in somewhere between 5,000 and 50,000 instructions being executed at the lowest level.” You are setting in motion a rich economy of modular machines, a frantic electric dance.

The fact that it happens too quickly to notice and inside an opaque box is—like any real magic—at once a great achievement and a grave deception, all the more so in this case because it hides the world’s greatest metropolis of engineered activity. To use a computer without ever seeing any of that would be a bit like if Dr. Seuss’s Horton stumbled on that speck of dust in the Jungle of Nool and sneezed it away, never knowing that it was the home of Whoville, a tiny planet full of life.

 

A surprising “what” starts and ends with itself. If you watch a monkey ride an elephant, say, or listen to a supersonic jet fly overhead, you won’t be left much different than you were before the monkey mounted and the jet flew over.

What’s special about a “how” is the contribution it makes to your working stock of analogies, associations, metaphors, symbols, percepts, and words, the raw material of your thoughts and concepts. A deep dive of any kind, be it into the structure of a buffalo’s vascular system or the curve of binding energy, will change your mind, not in the lowly sense of tweaking your political tack but by literally reshaping the way you think. Changing synaptic connection strengths will rejigger the way associative chain reactions fire through your brain. New mental track will be laid down. Dormant motifs and themes will change what bubbles up to consciousness. You will get the urge to chase down whiffs of a new kind of question.

For example, in the course of writing the section you just read about computers, all sorts of images, phrases, excerpts, and ideas kept tempting my attention, as though whole networked halos of my mind suddenly lit up. Tracing the effect of a single computer keypress set my thoughts chain-reacting in a way not unlike the signal itself. My brain, temporarily immersed in a computer’s electromechanics, was primed to approach every question in that frame. It was a fertile mood, a kind of nematic daydream, and, as if on purpose, it led me to a long-forgotten passage from a book I once read by Eric Baum, an AI researcher—one of those passages that might have never woken up had my mind not stirred it just so:

What metaphor then comes from, I suggest, is code reuse. When we understand a concept, it is because we have a module in our minds that corresponds to that concept.
The module knows how to do certain computations and to take certain actions, including passing the results of computations on to other modules. Metaphor is when such a module calls another module from another domain or simply reuses code from another module. Then we understand whatever the first module computes in terms of what the second module computes.
Consider, for example, the metaphor “time is money,” as reflected in English… we waste time, spend time, borrow time, save time, invest time in a project, and quit the project when some holdup costs us too much time. We budget time, run out of time, decide if something is worth our while…
What is happening here, I suggest, is that we have computational modules that are being reused in different contexts. For example, we understand “time is money” because we have a module for valuable resource management. This module, which is some computer code, allows us to reason about how to expend a variety of valuable resources: money, water, friendship, and so on. The circuitry in our brains implementing the code in our minds for understanding time either explicitly invokes circuitry for valuable resource management or contains circuitry (and hence code) that was, roughly speaking, copied from it.

I can see exactly why my mind dredged this up for me. I remember not quite understanding it the first time I read it—thinking that it might be a neat idea, but not quite having the context to chew it over. But here it was again, just as the related ideas of code reuse, abstraction, modularity, and complexity were becoming, as I tried to write about them, something of a minor preoccupation. That’s the beauty of “hows”: as they lead you to new information they simultaneously recast the old, as your brain reorients itself around the question.

Take this code reuse stuff. Baum’s odd proposition, that metaphors are like mental programs constantly reused, surfaced in the very moment I was most inclined to evaluate it, to imagine what such a thing might actually mean and draw out its consequences.
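
Taken at face value, the proposition can even be sketched in code. Here is a deliberately crude Python toy (mine, not Baum’s): one “valuable resource management” routine, written once, invoked for money and then, wholesale, for time. On this picture the reuse itself is the metaphor.

    # Baum's conjecture as a toy: one resource-management module,
    # reused across domains. Entirely my own construction.

    def manage_resource(budget: float, costs: list[float]) -> float:
        """Spend from a budget until it runs out; return what's left."""
        for cost in costs:
            if cost > budget:
                break          # the holdup costs too much; quit the project
            budget -= cost     # spending, wasting, investing: one operation
        return budget

    # The same code "understands" money...
    dollars_left = manage_resource(100.0, [30.0, 45.0, 50.0])   # -> 25.0

    # ...and, reused unchanged, "understands" time.
    hours_left = manage_resource(8.0, [2.5, 3.0, 4.0])          # -> 2.5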

For instance, I now have the inclination to ask: if words are like snippets of code, then what happens when I speak to somebody? When I start chatting with a friend about hamburgers, say, am I somehow “executing software” “on” my interlocutor’s mind?

My thinking, now primed for the question, might go as follows: just as a programmer’s text instructions set in motion a million small cascades in software and hardware, all the way down to the shuttling about of electrons, so too does a spoken phrase kick off in another person’s brain a wave of electric signals, starting with those tiny hairs in their ear and bounding through their nerves and axons as a string of electric pulses triggered, here and there, by changing concentrations of ions across neuronal membranes.

Those pulses aren’t just pulses. We know that they must encode information of a kind that’s hard to quantify or understand, but critical to how my friend thinks and lives: the idea of a “hamburger,” the taste of it, the memories of every one he’s eaten. (Just as the state of millions of transistors encodes, in one way or another, all of the documents and actions I take on my computer.)

Thus when I say something aloud to a friend, the sounds I make manipulate layers of machinery just like the layers implicated in the journey from the pressed keyboard key “Q” to a mark in Microsoft Word. Here the trip is from the “hardware” of vibrating columns of air, ears, and temporal lobe, up to the “software” of words and concepts and feelings. The main difference is that the layers in brain and body are many times more plentiful and complicated than the layers in a manmade computer.

What particularly tickles me about all this is what it implies about the very essay you’re reading. It implies that prose works like code. It implies that this very sentence, even, is instructing your brain and body to execute all sorts of programs: kinetic ones in your eyes and hands as they turn pages and scan blobs of ink; electric ones in your occipital lobe as those blobs are parsed into letters and words and sentences; abstract ones in your mind as those words become ideas. The machinery is yours, but these marks of mine are making it go.

I’m tempted to go on—I could go on and on—but then we might lose the point: that “whats” cauterize themselves, and lead to nowhere, and end in disappointment, where “hows” engage what you know and reveal what you don’t and unearth, with uncanny ease, the lodes of wonder due the student of the world.
