Futuristic buzzwords like the social web, strong AI, and the web of things have
become fashionable to talk about as hot new trends, but our digital
future will most likely be far stranger and more wonderful than anything
we can imagine today.

The truth is that new paradigms arise from novel solutions to old
problems.  Those solutions in turn have consequences that are both
unforeseen and unintended.  Much of today’s technology began as
investigations into obscure curiosities in first-order logic, radio
communication and the like.  The seeds of the next wave will be just as
improbable.

A Hole at the Center of Logic

Logic was first developed by Aristotle
in ancient times and survived for over 2000 years without any
significant augmentation or alteration.  It was one of those things,
much like the ebb and flow of the tides or the rise and setting of the
sun, that you simply accepted. You didn’t question it any more than you
would question day turning into night.

However, by the late 19th century, the seams began to show.  People like Cantor and Frege uncovered inconsistencies and tried to patch them up, but then Bertrand Russell
showed that the hole was deeper and more fundamental than anyone
had thought.  The problem can be illustrated with the following riddle:

The barber of Seville shaves every man who doesn’t shave himself, and only those men.  So who shaves the barber?

This is known as Russell’s paradox, and it is devilishly difficult to deal with because it points to an
instance in which a proposition can be shown to be both true and
not true.  If the barber shaves himself, he breaks the rule; if he doesn’t, the rule requires him to.  The more people looked, the more they found similar examples
of Russell’s paradox littered throughout logic and mathematics, leading
to a foundational crisis.
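In its set-theoretic form, which is the version Russell communicated to Frege, the whole paradox fits on one line.  Define the set of all sets that are not members of themselves and ask whether it belongs to itself:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R
```

Either answer immediately implies the other, which is exactly the barber’s predicament.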

The whole thing was, of course, ridiculous.  It was almost as if a
riddle in a crossword puzzle had led physicists to question gravity.
Everybody knows that 2+2 equals 4 and always will.  Surely that same
principle must apply throughout mathematics and logic?  It was just a
matter of constructing the rules of the system correctly.  Wasn’t it?

Hilbert’s Program

As seemingly trivial as the situation was, nobody could clear it up,
no matter how hard they tried. There were meetings and debates, lots of
hand-wringing and guffawing, but ultimately no real answer. Finally,
in 1928, David Hilbert, the most prominent mathematician of the time, set forth a program to resolve the crisis.

It largely centered on three basic principles to be proved:

Completeness: In a logical system, every statement can be either proved or disproved by the rules of that system.

Consistency: No statement can be both proved and disproved.  (i.e. if we can prove that 2+2=4, we can never also prove that 2+2=5.)

Computability: For any assertion, there is always an algorithm that can determine whether the statement is true or false (also called decidability, or the Entscheidungsproblem).

He didn’t have to wait long for an answer to the first two questions.
Unfortunately, it wasn’t the answer he was hoping for.  In 1931, 25-year-old Kurt Gödel published his incompleteness theorems, which showed that any formal system powerful enough to express basic arithmetic is either incomplete or inconsistent.  No such system can satisfy both conditions.
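Stated a bit more carefully (this is the standard modern formulation rather than Gödel’s original wording): for any consistent, effectively axiomatized theory T strong enough to express basic arithmetic, there is a sentence G_T that the theory can neither prove nor refute:

```latex
T \nvdash G_T
\qquad\text{and}\qquad
T \nvdash \lnot G_T
```

The second incompleteness theorem twists the knife further: such a theory cannot even prove its own consistency.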

The hole at the center of logic was just something that everyone
would have to learn to live with. Logical systems, no matter how
they’re set up, are fundamentally flawed.

The Universal Computer

Gödel’s paper was not conjecture, but a proof.  In a very real sense,
he used logic to kill logic.  In order to do so, he came up with an
innovative new tool called Gödel numbering.
The idea was that statements could be encoded as numbers, which meant a logical system could be made to express assertions about its own statements, including one that, in effect, asserts its own unprovability.
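The scheme is easy to sketch in code.  The toy below only illustrates the prime-power idea, not Gödel’s actual encoding, and the symbol table is invented for the example:

```python
# Toy Godel numbering: each symbol gets a small code, and a statement is
# packed into one integer as a product of prime powers (2^c1 * 3^c2 * ...).
# Unique prime factorization guarantees the statement can be recovered.
# The symbol table is invented for this example.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes(n):
    """Return the first n primes by trial division."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(statement):
    """Encode a sequence of symbols as a single integer."""
    number = 1
    for p, symbol in zip(primes(len(statement)), statement):
        number *= p ** SYMBOLS[symbol]
    return number

def decode(number):
    """Recover the symbols by dividing out successive primes."""
    symbols, candidate = [], 2
    while number > 1:
        exponent = 0
        while number % candidate == 0:
            number //= candidate
            exponent += 1
        if exponent:
            symbols.append(DECODE[exponent])
        candidate += 1
    return "".join(symbols)

n = godel_number("S0=S0")
print(n, decode(n))  # 808500 S0=S0 -- the round trip returns the statement
```

Once statements are plain numbers, statements about numbers can quietly become statements about statements, which is exactly the self-reference Gödel needed.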

A few years later, his method was taken up, independently and almost simultaneously, by Alonzo Church and Alan Turing to answer the question of computability.  Much like Gödel, they found the answer to Hilbert’s question to be negative: there is no general algorithm that can decide every mathematical question, and some values simply can’t be calculated.
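Their negative answer can be sketched with a short diagonal argument.  The function halts below is hypothetical, and that is the whole point: assuming it exists is what produces the contradiction.

```python
# Sketch of the diagonal argument behind undecidability.  Suppose some
# function halts(program, data) could always tell us whether program(data)
# eventually stops.  (It is hypothetical -- no such function can be written.)

def halts(program, data):
    """Pretend oracle: True if program(data) halts, False if it runs forever."""
    raise NotImplementedError("this is exactly what cannot exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the program does
    # when fed its own source.
    if halts(program, program):
        while True:          # loop forever when told it would halt
            pass
    return "halted"          # halt when told it would loop forever

# Feeding troublemaker to itself is the contradiction: if it halts, the
# oracle said it loops; if it loops, the oracle said it halts.  So no
# general-purpose halts() can exist.
```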

Turing’s method also had an interesting byproduct, the Turing machine (a working model
was recently featured as a Google Doodle), which could compute any
computable sequence using elementary symbols and processes.  This was
the first time anybody had seriously thought of anything resembling a
modern computer.  Turing would write in 1948:

We do not need to have an infinity of different machines doing different jobs. A single one will suffice. The engineering problem of producing various machines for various jobs is replaced by the office work of ‘programming’ the universal machine to do these jobs.

That, in essence, is what a modern computer is – a universal machine.
 If we want to write a document, prepare a budget or play a game, we
don’t switch machines, but software.
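A universal machine is easier to appreciate with a toy in hand.  The simulator below is a minimal sketch, and the transition table is a made-up example (it just flips bits and stops at the first blank), not any machine from Turing’s paper:

```python
# A minimal Turing machine simulator.  The RULES table is the "software";
# this made-up machine walks right along the tape, flipping 0s and 1s,
# and halts at the first blank cell.

from collections import defaultdict

# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", " "): (" ", 0, "halt"),
}

def run(tape, state="flip", head=0):
    """Apply RULES until the machine reaches the halting state."""
    cells = defaultdict(lambda: " ", enumerate(tape))  # blank beyond the input
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in range(len(tape)))

print(run("1011"))  # prints 0100
```

To make it do a different job you change RULES, not the simulator, which is Turing’s point in the quote above: the machine stays the same and only the ‘programming’ changes.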

Alas, at the time it was just a figment of Turing’s imagination.  To
be practical, a machine would need to calculate at speeds considered
incredible at the time, and there was also the problem of encoding
instructions in a way that could be reliably processed and then displayed
in a fashion humans could understand.

The Zero Year of 1948

After a long and difficult gestation period, the digital world was finally born at Bell Labs in 1948.  First came the transistor, invented by John Bardeen, William Shockley and Walter Brattain, which could switch at the speeds necessary to build a practically useful Turing machine.

Next was Claude Shannon’s creation of information theory.
 At the core of the idea was the separation of information and content.
 To get an idea of what he meant, take a look at the QR code at the top
of the page.  It surely contains information, but not necessarily
content.

Nevertheless, that content (a must-see video of Tim Berners-Lee explaining his vision for the next Web) can be unlocked using the code.

However, his main achievement was showing how any kind of information can be encoded into binary digits, or bits.
That information could be content, but it could also be other kinds of
encoding, like redundancies to make messages more reliable or
compression codes to make them smaller and more efficient (all modern
communications use both).
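To make “encoded into binary digits” concrete, here is a toy sketch.  The message, the single parity bit and the run-length scheme below are deliberately simple stand-ins, far cruder than the codes used in real systems:

```python
# Toy illustration of Shannon's point: any message can be turned into bits,
# and extra bits can then be spent on reliability or saved by compression.

def to_bits(text):
    """Content as bits: each character becomes 8 binary digits."""
    return "".join(f"{ord(c):08b}" for c in text)

def add_parity(bits):
    """Redundancy: append one parity bit so a single flipped bit is detectable."""
    return bits + str(bits.count("1") % 2)

def run_length_encode(bits):
    """Compression: describe runs of identical bits as (bit, length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

message = to_bits("OK")
print(message)                     # 0100111101001011
print(add_parity(message))         # one extra bit buys error detection
print(run_length_encode(message))  # fewer symbols when the runs are long
```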

It was information theory, along with Shannon’s earlier work showing how Boolean algebra could be realized in physical switching circuits as logic gates,
that made the information age possible.  Yet one last obstacle remained
before the digital world could become operational.
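Shannon’s earlier insight is simple enough to sketch directly: treat true and false as switches that are on or off, and Boolean algebra becomes circuitry.  The half adder below is the textbook illustration, not anything taken from Shannon’s thesis:

```python
# Boolean algebra as logic gates: a half adder built from AND and XOR.
# Feeding it two one-bit inputs yields their sum bit and carry bit, the
# building block from which multi-bit adders are assembled.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two single bits, returning (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
```

Chain enough of these together and you can add, compare and ultimately compute anything, which is why the next obstacle was physical rather than conceptual.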

The Tyranny of Numbers

For all of the genius that went into creating the theoretical basis
of modern computing, there remained a very serious practical problem
known as the tyranny of numbers.

Complicated electronic devices require thousands of logic gates, each
containing several transistors along with several other electrical
components.  Connecting and soldering each one by hand is an invitation
to disaster.  One defect can render the whole thing useless.

The solution was uncovered by Jack Kilby and Robert Noyce, who independently proposed that all of the necessary elements of an integrated circuit
could be etched onto a single semiconductor chip.  The loss of performance
from using one suboptimal material would be outweighed by the
efficiencies won by overcoming the tyranny of numbers.

Today, the company Robert Noyce would help found, Intel, squeezes billions of transistors onto those chips, making Turing’s machine universal in more ways than one.

Practical Nonsense

All of the developments that led to modern computing had one thing in
common – they were all considered useless by practically minded people
at the time.  The hole in logic, Hilbert’s program, information theory
and even the integrated circuit went almost unnoticed by most people
(even specialists).

In a very similar vein, many of the questions that will determine our digital future seem far from matters at hand.  What do fireflies, heart attacks and flu outbreaks have to do with Facebook campaigns?  What do we mean when we speak of intelligence?  What does a traveling salesman have to do with our economic future?

What is practical and what is nonsense is often a matter not of
merit, but one of time and place.  Our digital future will be just
as improbable as our digital past.

As I explained in an earlier post, our present digital paradigm will come to an end somewhere around 2020.
 What comes after will be far stranger and more wonderful than that
which has come before and will come upon us at an exponentially faster
pace.  A century of advancement will be achieved in decades and then in
years.

The key to moving forward is to understand that, as far as we have come, we are, in truth, just getting started.  The fundamental problem is not one of mere engineering, but of a sense of wonder and a joy in the discovery of basic principles or, as Richard Feynman put it, the pleasure of finding things out.

– Greg