“Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1.5 tons.” – Popular Mechanics, March 1949

“Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment.” – Gordon Moore, 1965

“For centuries, explorers have searched the world for the fountain of youth. Today’s billionaires believe they can create it, using technology and data.” – Ariana Eunjung Cha, April 4, 2015
The phrase “hit a wall” in idiomatic English means to reach a point where progress stops or slows significantly. Engineers don’t warm up to that phrase very easily. For an engineer a wall is a challenge – more like a speed bump. They throw their hands up only if they’re being robbed, and maybe not even then. They have little use for the word “impossible.”
Let’s say you’ve written a book on genetics, and as a way of getting would-be readers pumped up you decide to store every word and illustration in your book – table of contents, index, everything – in DNA. And not only store it, but retrieve it as well. To drive home the immensity of DNA storage, someone suggests you write 70,000,000,000 (70 billion) copies of the book in DNA. Impossible? Of course not. Geneticist George Church did it three years ago.
On April 19, 1965, an engineer named Gordon Moore published an article in which he noted something remarkable about integrated circuits and their components. Often referred to as a chip, an integrated circuit is a microelectronic device consisting of many interconnected transistors and other components fabricated on a semiconductor wafer, usually silicon. Without integrated circuits we would have no smartphones, tablets, PCs, Macs, or countless other electronic devices. “For simple circuits,” Moore wrote, “the cost per component is nearly inversely proportional to the number of components.”
This was the engineering equivalent of striking gold. With more components you not only got increased performance (since electrons have a shorter commute); the cost per component also decreased – at least up to a point. At the time, chips contained only a handful of components, yet Moore predicted an exponential trend was underway, so that by 1975 “the number of components per integrated circuit for minimum cost will be 65,000.”
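Both observations are easy to put into numbers. Here is a minimal Python sketch; the cost-curve coefficients are invented for illustration (Moore’s article gives no such formula), and the starting count of roughly 64 components in 1965 is the round number his doubling curve implies:

    # Moore's 1965 observations, sketched with illustrative numbers.
    # Cost per component falls roughly as 1/n for simple circuits,
    # but yield losses push it back up past an optimum point.
    def cost_per_component(n, a=100.0, b=0.01):
        return a / n + b * n   # a/n: shared overhead; b*n: yield penalty

    best = min(range(1, 1001), key=cost_per_component)
    print(best)  # 100: the component count with minimum cost per component

    # His extrapolation: ~64 components in 1965, doubling every year.
    components_1975 = 64 * 2 ** (1975 - 1965)
    print(components_1975)  # 65,536 -- Moore rounded to "65,000"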
This was a bold extrapolation on Moore’s part. Before integrated circuits, transistors and other components were wired together by hand on a circuit board – a painstaking process, but one that, early on, was less expensive than producing integrated circuits. In 1965 only a few companies were making integrated circuits, and their customers were mostly NASA and the U.S. military. Add to this the fact that only about 10 to 20 percent of the transistors actually worked, according to Moore’s recollection.
Yet by 1975 Intel, the company Moore co-founded in 1968, was preparing to market a memory chip with 32,000 components, leaving his original estimate off by only a factor of two.
This progression – a doubling of transistor density every year, later revised to every two years – became known as Moore’s Law. Higher densities meant increased performance and more useful technology; they also drove component cost down, deflating retail prices and attracting more consumers.
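To get a feel for how relentless that compounding is, here is a back-of-the-envelope Python sketch, taking Intel’s 4004 of 1971 (about 2,300 transistors) as the starting point and the revised two-year doubling period:

    # Two-year doubling, compounded from the Intel 4004 (1971, ~2,300 transistors).
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1991, 2011, 2015):
        print(year, f"{transistors(year):,.0f}")
    # 2015: roughly 9.6 billion -- in the neighborhood of today's biggest chips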
Bolstered by other technological innovations, 50 years of Moore’s Law has brought us the Digital Age, as Moore predicted. More precisely, engineers, project managers, and entrepreneurs have innovated, invested, and generally dedicated their lives to keeping Moore’s Law going. And consumers eagerly line up for the latest offerings.
How far have we come since Moore wrote his article? Today’s integrated circuits contain billions of transistors and are fabricated in factories that cost billions of dollars to build. Yet the cost per transistor has dropped from about $30 in 1965 (in 2015 dollars) to an infinitesimal amount today.
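A rough illustration of that drop, assuming a hypothetical $100 chip with 2 billion transistors (round numbers chosen for the example, not figures from the text):

    # Cost per transistor, 1965 vs. an assumed modern chip.
    cost_1965 = 30.0                   # ~$30 per transistor in 2015 dollars
    modern_chip_price = 100.0          # assumed: a $100 chip...
    modern_transistor_count = 2e9      # ...with 2 billion transistors
    cost_today = modern_chip_price / modern_transistor_count
    print(f"${cost_today:.0e} per transistor")        # $5e-08: five millionths of a cent
    print(f"{cost_1965 / cost_today:,.0f}x cheaper")  # ~600,000,000x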
In 2014, semiconductor production facilities made some 250 billion billion (250 × 10^18) transistors. This was, literally, production on an astronomical scale. Every second of that year, on average, 8 trillion transistors were produced. That figure is about 25 times the number of stars in the Milky Way and some 75 times the number of galaxies in the known universe.
Remember, that’s 8 trillion
transistors for every second of 2014. The flood of transistors has “been the
ever-rising tide that has not only lifted all boats but also enabled us to make
fantastic and entirely new kinds of boats.”
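The per-second figure is straightforward arithmetic, and the astronomical comparisons follow from the rough estimates the text implies (about 300 billion stars in the Milky Way, about 100 billion galaxies in the known universe):

    # 250 billion billion transistors in 2014, averaged per second.
    annual_total = 250e18
    seconds_per_year = 365 * 24 * 3600            # 31,536,000
    per_second = annual_total / seconds_per_year
    print(f"{per_second:.1e}")                    # ~7.9e12, i.e. ~8 trillion/s

    stars_in_milky_way = 3e11                     # rough estimate
    galaxies_in_universe = 1e11                   # rough estimate
    print(f"{per_second / stars_in_milky_way:.0f}")    # ~26
    print(f"{per_second / galaxies_in_universe:.0f}")  # ~79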
Though there have been predictions in the past of the imminent demise of Moore’s Law, engineers were able to find “ways around what we thought were going to be pretty hard stops,” as Moore stated in a recent interview. But ultimately, there are the fundamental limits of the known world: the speed of light and the atomic nature of materials.
As Ray Kurzweil has pointed out frequently – and as a surprising number of researchers seem to forget – Moore’s Law is a computing paradigm, the fifth such paradigm since the 1890 census. All five paradigms have shown exponential growth. The first four – electromechanical, relay, vacuum tube, and transistor – eventually lost steam and were superseded by a technology previously found only in niche markets, such as the military, or that had languished in sparsely funded research labs. As a paradigm slows – no longer progressing exponentially – more dollars are spent on developing the most promising technologies to replace it.
Going 3D
The semiconductor industry is running out of tricks to keep shrinking silicon transistors. Right now the smallest feature size is 14 nanometers, and by 2020 transistors will need to be five nanometers to keep pace with Moore’s Law. Is this a wall approaching?
Not to Samsung. They’re already building flash memory chips using three-dimensional integrated circuits to achieve performance gains. Instead of placing transistors side by side on a plane of silicon, they’re stacking them, taking up half the space of planar chips. They’re building high-rise condos instead of subdivisions with smaller and smaller houses. More layers mean better performance and no more shrinking. No longer will they have to retrofit multibillion-dollar factories to produce the latest chips.
IBM is taking a different approach, at least for now. They’re developing transistors built with carbon nanotubes instead of silicon, which they hope to have ready for mass production by 2020. At two nanometers in diameter, the nanotubes – which, though seamless, resemble rolled-up chicken wire – could continue the pace of cramming more transistors onto a silicon substrate. Based on simulations, the nanotube transistors are about five times as fast as ones made from silicon.
In his magnum opus Kurzweil cites the work of Peter Burke of the University of California, Irvine, who demonstrated nanotube circuits operating at 2.5 gigahertz (GHz). However, in a peer-reviewed article Burke argued that the theoretical speed limit of these nanotube transistors should be measured in terahertz, where 1 THz equals 1,000 GHz. (What a boost that would be to my 2.66 GHz MacBook Pro!)
The biggest problem is positioning the nanotubes close enough together on the chip. IBM’s preferred approach is to label the substrate and nanotubes with a compound “that would cause them to self-assemble into position.”
Self-assembly of nanoscale circuits would be a world-changer. As Kurzweil notes, citing the work of
researchers at UCLA, having “potentially trillions of circuit components organize
themselves, rather than be painstakingly assembled in a top-down process, would
enable large-scale circuits to be created in test tubes rather than in
multibillion-dollar factories, using chemistry rather than lithography.”
Creating nanocircuits in chemistry flasks “will be another
important step in the decentralization of our industrial infrastructure and
will maintain the
law of accelerating returns through this century and beyond.”
In two decades robots
will be in charge
One of the reasons for continuing the exponential development of technology is eventually to turn the task over to robots, which will graduate and take charge of R&D. In the 2020s, working with advanced hardware and computational strategies, researchers will make major progress in emulating the human brain. By 2029 a computer will be able to pass itself off as human under competent interrogation during a Turing Test.
Meanwhile, the inexorable march of miniaturization will make
its way into our bodies, including our brains. Nanobots the size of a red blood cell will
enter our bloodstream and augment our intelligence, combining the pattern
recognition power of our biological brains with the speed, capacity, and
knowledge-sharing ability of our technology.
This should get underway sometime in the 2020s.
By the 2030s the nonbiological portion of our intelligence
will predominate. Somewhere around 2045
“the pace of technological change will be so rapid, its impact so deep, that
human life will be irreversibly transformed.”
Kurzweil calls this period the Singularity.
AI researcher Ben Goertzel thinks the Singularity could arrive much sooner – in 10 years.
Technology will continue to empower us with better and cheaper products, some of which will give us the ability to make still better and cheaper products. Keynesianism and its free-lunch institution known as the welfare state will gradually kill off banks and their governments as we’ve known them. Our overlords will not go quietly, but there is a better future ahead.
As a hint of that better future I offer this sampling of
encouraging developments:
1. Surgeon Anthony Atala demonstrates an early-stage experiment that could someday solve the organ-donor problem: a 3D printer that uses living cells to output a transplantable kidney. Using similar technology, Dr. Atala’s young patient Luke Massella received an engineered bladder 10 years ago; we meet him onstage.

2. Just like his beloved grandfather, Avi Reichental is a maker of things. The difference is, now he can use 3D printers to make almost anything, out of almost any material. Reichental tours us through the possibilities of 3D printing, for everything from printed candy to highly custom sneakers.

3. 3D printing has grown in sophistication since the late 1970s; TED Fellow Skylar Tibbits is shaping the next development, which he calls 4D printing, where the fourth dimension is time. This emerging technology will allow us to print objects that then reshape themselves or self-assemble over time. Think: a printed cube that folds before your eyes, or a printed pipe able to sense the need to expand or contract.

4. What we think of as 3D printing, says Joseph DeSimone, is really just 2D printing over and over ... slowly. Onstage at TED2015, he unveils a bold new technique – inspired, yes, by Terminator 2 – that’s 25 to 100 times faster and creates smooth, strong parts. Could it finally help to fulfill the tremendous promise of 3D printing?
The chart shows the price-performance of forty-nine computational systems in the 20th century, measured in instructions per second per thousand constant dollars. A rising straight line on a logarithmic chart indicates exponential growth. (Source: The Singularity Is Near, p. 66)
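The reason a straight line signals exponential growth: if price-performance is p(t) = p0 * 2^(t/T), then log p(t) = log p0 + (t/T) * log 2, which is linear in t. A minimal Python sketch with made-up data (matplotlib assumed available):

    import matplotlib.pyplot as plt

    # Made-up exponential data: price-performance doubling every two years.
    years = list(range(1900, 2001, 10))
    perf = [2 ** ((y - 1900) / 2) for y in years]   # doublings since 1900

    plt.semilogy(years, perf, marker="o")  # log scale on the y-axis
    plt.xlabel("Year")
    plt.ylabel("Instructions per second per $1,000 (arbitrary units)")
    plt.title("Exponential growth appears as a straight line on a log chart")
    plt.show()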