Defining it, however, is a bit harder. Ask anyone, even professional researchers who are supposed to know about such things (and therefore be very intelligent themselves), and you’ll get a widely disparate set of views.

The question of intelligence is coming to the fore because we want to make our machines intelligent (presumably that would give us license to relax and be a bit dumber). It’s an incredibly hard problem, and to solve it we will first need to understand what intelligence actually is. There has been some progress, but we’ve still got a long road ahead.


The intelligence quotient

It’s hard to think about intelligence without taking into account IQ tests. They have a relatively long history, some say going back to imperial China, but they certainly date back at least to the 19th century. Today, most of us take them at an early age to determine our aptitude.

They are, of course, controversial. They are very good at identifying problems with intelligence, but fall short at predicting outstanding ability. Richard Feynman, one of the great geniuses of the 20th century, had an IQ of 125, above average but by no means unusual. It seems that once you get above a certain level, you’re smart enough to do just about anything.

In a comprehensive report by the American Psychological Association, IQ scores were shown to have a correlation of roughly 0.3 to 0.5 with professional success, so they are important but not determinative. Other factors, associated with emotional intelligence, appear to be, if anything, more decisive.
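Reading those figures as correlation coefficients of about 0.3 to 0.5 (an interpretation on my part, not a claim the report itself spells out), a quick calculation shows why a real correlation still leaves most of the outcome unexplained:

```python
# Rough illustration (assumed interpretation): if IQ correlates with success at
# r = 0.3 to 0.5, the share of variance in success it "explains" is r squared.
for r in (0.3, 0.5):
    print(f"r = {r:.1f} -> explains roughly {r**2:.0%} of the variance in outcomes")
```

That works out to somewhere between about 9% and 25% of the variance, which is exactly the “important but not determinative” territory the report describes.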

Much of the research (including the report noted above) indicates that intelligence is highly heritable and becomes even more so as we age, so fawning before nursery school admission committees is most likely a waste of time. However, the good news is that we’re all getting more intelligent with every generation, so if you feel smarter than your parents, you probably are.


The Turing test

Another way of looking at intelligence is the Turing test, devised by the English computer genius Alan Turing in 1950 to distinguish between humans and machines. It was devilishly simple: have a human judge converse with both machines and people and see whether the judge could reliably tell the difference.
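As a minimal sketch of that setup (every function here is a hypothetical stand-in, not a real program), the blind trial might be simulated like this:

```python
import random

# Toy simulation of a Turing-style blind trial; all respondents are hypothetical.
# The judge sees only text and labels each transcript "human" or "machine";
# accuracy that stays near 50% means the machine passes.

def human_reply(prompt):
    return "Honestly, I'd rather talk about the weather."

def machine_reply(prompt):
    return "Honestly, I'd rather talk about the weather."

def judge(transcript):
    return random.choice(["human", "machine"])  # this naive judge just guesses

trials, correct = 1000, 0
for _ in range(trials):
    label, respond = random.choice([("human", human_reply), ("machine", machine_reply)])
    if judge(respond("What do you think about poetry?")) == label:
        correct += 1

print(f"Judge accuracy: {correct / trials:.1%} (near 50% means the machine passes)")
```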

Alas, it turned out that the Turing test was relatively easy to pass. One program, called ELIZA, was able to fool people as early as 1966. Another, named PARRY, stumped even experienced psychologists.

Computers seem to be able to do some tasks as well as or even better than humans. Cheap computer programs are able to play chess as well as human professionals. We trust algorithms on sites like Google and Amazon to recommend things to us, sometimes even more than human experts.

The problem is that a computer that can pass a Turing test on one kind of task will fail miserably on another. We might trust Amazon to choose a book for us, but not a mate. Google Maps is great for giving us directions to a friend’s house, but would be at a loss if we asked it who might be fun to visit.


Artificial intelligence

The field of Artificial Intelligence sprang forth in 1956 from a summer conference at Dartmouth that included such luminaries as Marvin Minsky, still a leading thinker in the field, and Claude Shannon, the father of Information Theory. There was great enthusiasm and they figured they would have the problem licked within 20 years.

Alas, it was not to be. By the 1970s, Artificial Intelligence had hit hard times and entered a long “winter” in which governments refused to fund further research. Interest waned and very little progress was made until the late 1990s, when important advances arrived, most famously Deep Blue’s defeat of Garry Kasparov in a chess match held in 1997.

Today, intelligent machines permeate daily life, from robots in factories to customer service agents to transmissions in cars to video games. We rarely see them at work, but they’re there, under the hood, making our lives run a bit more smoothly. Yet, even with the enormous progress, we’re still a long way from what was envisioned in 1956.


Heuristics and chunking

As this Wired article shows, computers think very differently than you and I do. They can perform tasks according to rules, called heuristics, extremely quickly. They sort through millions of possibilities in a matter of seconds and choose the single one that best fits a set of parameters. This makes it possible to optimize a computer to do some jobs much better than a human ever could.
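As a rough sketch of that generate-and-score pattern (not any particular program’s code; the scoring rule and the target value are made up for illustration):

```python
import random

# Illustrative sketch of heuristic search: generate a huge pool of candidate
# solutions, score each one with a simple rule of thumb (the heuristic), and
# keep the single candidate that best fits the parameters.

def heuristic_score(candidate):
    target = 0.5                      # made-up parameter for illustration
    return -abs(candidate - target)   # closer to the target scores higher

candidates = (random.random() for _ in range(1_000_000))  # millions of possibilities
best = max(candidates, key=heuristic_score)               # pick the best-scoring one

print(f"Best candidate found: {best:.6f}")
```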

We, on the other hand, have very slow processors. While signals in computer chips travel at a large fraction of the speed of light, our neurons run at the tortoise pace of about 200 miles per hour. Yet, we do have an important advantage. Our billions of organic processors can run in parallel, allowing us to excel at pattern recognition.
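To put that speed gap in rough perspective (the half-light-speed figure for chip signals is my own ballpark assumption, not a number from the article):

```python
# Back-of-the-envelope speed comparison (assumed figures, orders of magnitude only).
mph_to_mps = 0.44704
neuron_speed = 200 * mph_to_mps      # ~89 m/s for fast nerve impulses
chip_signal_speed = 0.5 * 3.0e8      # assume signals travel at ~half the speed of light

print(f"Neuron signal: {neuron_speed:,.0f} m/s")
print(f"Chip signal:   {chip_signal_speed:,.0f} m/s")
print(f"Speed gap:     roughly {chip_signal_speed / neuron_speed:,.0f}x")
```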

Psychologists call this chunking, and it’s what accounts for high-caliber human abilities. Even something as simple as catching a ball requires very complex multivariate computations, yet we do it with ease, on the run and with the sun in our eyes. Computerized robots, meanwhile, have trouble navigating an ordinary room.
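To get a feel for the math hiding inside a routine catch, here is a toy, idealized calculation (no air resistance, made-up launch numbers) of where a thrown ball will come down, the kind of problem a fielder solves implicitly while running:

```python
import math

# Toy projectile calculation (idealized: no drag, flat ground, assumed numbers)
# showing the multivariate math hidden inside catching a thrown ball.
g = 9.81                       # gravity, m/s^2
speed = 20.0                   # assumed launch speed, m/s
angle = math.radians(35)       # assumed launch angle

vx = speed * math.cos(angle)   # horizontal velocity component
vy = speed * math.sin(angle)   # vertical velocity component

flight_time = 2 * vy / g       # time until the ball returns to launch height
landing_distance = vx * flight_time

print(f"Flight time: {flight_time:.2f} s, lands about {landing_distance:.1f} m away")
```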

Moreover, our brains are constantly rewiring themselves as we gain experience. As a CEO, Jack Welch was famous for his ability to analyze financial statements at a glance, yet no one noticed this ability as he was coming up. His engineering degree had optimized him for other activities and it was only after years as a manager that he gained his financial acumen.


As smart as we want to be

When you think about it, we’re really lucky. Our brains are incredible intelligence machines. They are, as neurologist Antonio Damasio notes, able to internalize information even before we are consciously aware that we possess it, and can learn new abilities as we come to need them. We’re optimized for adaptation.

The catch is that in order to learn, we need to experience failure, fear and pain. Not all experiences are equal. Emotional ones trigger the release of hormones that promote synapse building and memory. Then we need to repeat the unpleasant activity to strengthen those new synaptic connections. The result is what we call expertise.

So it’s not enough to simply observe. We must, to paraphrase Teddy Roosevelt, get our faces marred with dust and sweat and blood, strive valiantly, err and come short again and again. It’s a difficult business and it will be a long time before we can design computers to simulate it.

Intelligence is, after all, infinitely more than processing.

Greg Satell is a blogger and a consultant at the American online media blog Digital Tonto. You can read his blog entries at http://www.digitaltonto.com