Saturday, December 20, 2014

Counterfeiting is the health of those who rule us

I found this re-posted on a Facebook page today, having originated from End the Fed.


This is not so much posing a question as making a point.  Nevertheless, how might a contemporary economist (i.e., an inflationist) address such a query, assuming it was serious?  Here’s how I imagine she might go about it in an offhand manner.

In the US, counterfeiting is illegal only when some person or organization other than the Federal Open Market Committee practices it.  When politically appointed bureaucrats print money, it's not only legal, it's essential to the health of the economy.  If you don't believe me, read the textbooks.  It turns out that what’s true on a micro level — that counterfeiting is theft — isn’t true on the macro level.  Only those committed to the principles of the Austrian school would disagree.  And the world today little resembles the teachings of the Austrian school.

Without the Fed and its fiat machine we would have to rely on the market to provide the money necessary to maintain the division of labor.  The market, that wild beast forever in need of government restraint, would control the most critical element of the economy.  Consider: The Fed-less market of the 19th century brought us price deflation.  Price deflation — lower prices — the ultimate nightmare.  Far better to have the spotless bureaucrats of the Committee debase the currency so that goods keep rising in price.  Chairman Greenspan once reminded us of the overwhelming success they've had driving the “price level” through the roof.  As any poor person will assure you, having one’s money buy less is a godsend.

Shhh.  Listen.  That tap-tap-tap you hear is a banker probing in the darkness of his empty vault.  That could be the big banks' fate if the Greenspan-Bernanke-Yellen combo couldn't make unlimited amounts of money snap into existence.  How could the federal government ever keep the economy from imploding and shore up our inalienable entitlements if the big banks dried up?  If it had to rely on greedy taxpayers for all its funding and borrowing?  If it couldn't swap its IOUs for freshly minted digits?  If the Fed couldn't tap its computer to create those digits?

In other words, if it couldn’t counterfeit?

The Federal Reserve Act was signed into law by Thomas Woodrow Wilson, 28th president of the United States, on December 23, 1913.  As such, we should remember the Fed this Christmas and every Christmas — every day! — for without it we would've had no "war to end all wars" and its record-setting slaughter, no Great Depression, no follow-up world war, and no modern world plagued with the fallout from Keynesianism.

Let's not forget Mr. Wilson, either, usually regarded as among the top ten greatest presidents.

If war is the health of the state, then counterfeiting is the health of war. Taxes alone wouldn't begin to provide the revenue needed to keep the rest of the world in line and the home folks dependent and obedient.   Onward central bankers!  Onward counterfeiting! 

Conclusion

Offhand, I can't think of a bigger racket than central banking and fiat money.

Wednesday, December 3, 2014

Think small — very small — incredibly small

Do you have forebodings about the future?  I do, which is why I’ve been looking more closely at the transformations technology promises to bring us.

First, a brief review.  I’ve previously written about the rapid pace of technological development (here and here) and Ray Kurzweil’s point that though technology is growing exponentially, we experience it linearly, and we tend to base our expectations on our linear experience.  (See his essay “The Law of Accelerating Returns” for details.)




Here’s an example.  When the Human Genome Project began in 1990, Kurzweil predicted the project would be completed in 15 years.  Almost no one believed him.  After a year of work, biochemists had succeeded in transcribing one ten-thousandth of the genome.  Seven years later a mere one percent had been finished.  One imagines the laughing was well underway.  But the project actually finished ahead of schedule, in 2003.
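The arithmetic of doubling explains the apparent paradox.  Here is a minimal sketch, assuming the sequencing rate doubled roughly once a year, which is close to what actually happened:

```python
# If 1% of the genome is done after 7 years and capacity doubles yearly,
# the remaining 99% takes only about 7 more doublings.
progress = 0.01   # fraction complete in 1997, seven years in
year = 1997
while progress < 1.0:
    progress *= 2              # sequencing capacity roughly doubles yearly
    year += 1
print("complete around", year)  # -> 2004, close to the actual 2003 finish
```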

In The Singularity is Near Kurzweil explains how this happened:
Scientists are trained to be skeptical, to speak cautiously of current research goals, and to rarely speculate beyond the current generation of scientific pursuit. This may have been a satisfactory approach when a generation of science and technology lasted longer than a human generation, but it does not serve society’s interests now that a generation of scientific and technological progress comprises only a few years.
He has been amazingly accurate in his predictions, and over the years he’s turned those predictions into a fortune inventing and marketing products based on his expectations of the technologies that will be available in years ahead. “Invention is a lot like surfing,” he says, “you have to catch the wave at the right time.”

Kurzweil sees three overlapping areas where growth is increasing exponentially and which will transport us to a world formerly associated with science fiction and fantasy: genetics, nanotechnology, and robotics.  

Our atomically-precise future

In this article I want to discuss nanotechnology — a term popularized by K. Eric Drexler in his 1986 book Engines of Creation: The Coming Era of Nanotechnology (online here) — and its implications for our economic lives.  

By “nanotechnology” Drexler means “technology based on the manipulation of individual atoms and molecules to build structures to complex, atomic specifications.”  To distinguish his meaning from other, more inclusive definitions he uses the term atomically-precise manufacturing (APM).  APM will build things by rearranging atoms and using those nano-structures as building blocks for larger products.

What kinds of products?  Everything being built today by conventional means and other products that can’t be built today.

Materials are characterized by their arrangement of atoms — if we rearrange the atoms in coal a certain way we can produce a diamond.  Atomically-precise control of materials can produce patterns of atoms that are out of reach of today’s technologies.

Atomic precision starts with small-molecule feedstocks, atomically precise by nature and often available at a low cost per kilogram. A sequence of atomically precise processing steps then enables precise control of the structure of materials and components, yielding products with performance improved by factors that can range from ten to over one million.  [my emphasis]
APM will change industrial production beyond recognition or replace it outright.  
Traditional manufacturing builds in a "top down" fashion, taking a chunk of material and removing chunks of it - for example, by grinding, or by dissolving with acids - until the final product part is achieved. The goal of nanotechnology is to instead build in a "bottom-up" fashion, starting with individual molecules and bringing them together to form product parts in which every atom is in a precise, designed location.  [Foresight Institute]
APM-based technologies will [from Drexler, Radical Abundance]:
  • Slash resource consumption and toxic emissions
  • Provide the infrastructure for low-cost solar energy and a carbon-neutral economy
  • Produce better products at far lower cost than today
  • Collapse long specialized supply chains to a few steps of local production
  • Transform daily life, labor, and the structure of society on Earth, just as the agricultural, industrial, and information revolutions have done
Drexler likens APM to digital technology transferred to the material world, in which microblocks are combined much like ink droplets to form endless, intricate patterns.
A better analogy than ink droplets, however, would be blocks of the sort found in high-end Lego sets, which include not only blocks in different shapes and colors, but also blocks that provide intricate, functional parts such as motors, gear trains, sensors, and computers.
When advanced APM arrives within the next decade you’ll have a box on your desk that will allow you to do with atoms what you presently do with bits and pixels.  At roughly the same price.  But without worrying about atoms, just as you don’t concern yourself with bits and pixels.  On a larger scale, advanced APM will mean, for example,
replacing an enormous automobile factory and all of its multi-million dollar equipment with a garage-sized facility that can assemble cars from inexpensive, microscopic parts, with production times measured in minutes. 
The technologies that can make these visions real are emerging—under many names, behind the scenes, with a long road still ahead, yet moving surprisingly fast.
The need for engineering

As with the stunning nature of Kurzweil’s predictions for the decades ahead (which you really ought to check out), it’s difficult to imagine the economic world as we know it being turned upside down because we’re slipping past the knee of an exponential curve.  Yet APM is not waiting for some scientific breakthrough to make it a real possibility.  The road ahead “has no gaps, no chasms to cross,” Drexler says.

What is missing from APM research?  Better engineering; specifically, “AP molecular systems engineering.”
No matter how research-intensive a project may be, work coordinated around concrete engineering objectives will eventually be required to produce concrete engineering results. . . . 
The semiconductor industry provides a model for coordinating research to advance the technology of an entire field. What’s more, the achievements of semiconductor engineers give us a sense of the potential scale of results, for it was their work that brought us nanoscale digital information systems and today’s Information Revolution.
Roadmapping has been the key to the success of the semiconductor industry.  

In 2007, under the auspices of the Waitt Family Foundation, researchers in nanotechnology finally produced a guiding document for their field: Productive Nanosystems: A Technology Roadmap.  The Executive Summary tells us that: 
It is uncontroversial that expanding the scope of atomic precision will dramatically improve high-performance technologies of all kinds, from medicine, sensors, and displays to materials and solar power. Holding to Moore’s law demands it, probably in the next 15 years or less. [My emphasis]
Assuming the document is up-to-date, APM should be here by or before 2022.  Mark your calendars.

3D Printing — a hint of the APM future

The nascent technology of 3D printing is a stepping stone between traditional manufacturing methods and APM.  Drexler contrasts 3D with traditional processes:
Some traditional methods make a shape all at once using a costly, specialized tool, like a mold to shape plastic, a die to stamp steel, or an optical mask in semiconductor lithography. Other traditional methods carve shapes by removing small bits of material using general-purpose equipment like lathes, drills, and milling machines. 
3D printing, by contrast, makes shapes by adding small bits of material using general-purpose machines guided by digital data files. 3D printing can make shapes beyond the reach of casting or carving.
One 3D printer, the RepRap, is a kit you assemble yourself.  With a RepRap 3D printer you can “print” the parts that make up another RepRap and assemble them yourself.  According to the video at the RepRap website, the RepRap project aims to put a factory in every home — a factory that can make more factories.  RepRap has a slogan: “wealth without money.”

RepRap and other 3D printer user communities stir memories of the Homebrew Computer Club that launched the PC revolution in the mid-Seventies.  The Club spawned many pioneers in the microcomputer industry, including the two Steves.  The MITS Altair 8800 was one of the first kit computers, comparable to the RepRap today.  Unlike the Altair, though, the RepRap can self-replicate.

Conclusion

There are downsides to every technology.  As with clubs and axes once used to obtain food and shelter, APM technologies can be weaponized.  The good news is that APM, like information technologies, has a strong decentralizing and price deflationary component.  In this sense it works in favor of individuals and free markets, and against the Keynesian states we live under.  The APM revolution has the potential to dramatically empower people, as we’re seeing with the information revolution today. 

Though there are many questions one could raise about APM, I have one in particular that I invite any APM researcher to address: If APM continues to develop into a Kurzweil future, will someone someday be able to “print” gold and other precious metals?  

And further: The whole of economics is based on scarcity.  APM won’t eliminate scarcity, but it could surely relegate it to a position of less importance since common feedstocks will replace scarce resources.  What does a world with scant scarcity look like?

We are encouraged by psychologists to live in the present, in the here-and-now.  The here-and-now has many dimensions, one of which is the legacy of innumerable statists.  I look forward to a “present” when their legacy is found only in history books.  Technology will help get us there.

Friday, October 24, 2014

Who rules? Information Technology

Natural systems show us only lower bounds to the possible, in cell repair as in everything else.  — K. Eric Drexler, Engines of Creation, p. 105
Our ability to create models— virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips. — Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Kindle Locations 9409-9412; all subsequent references in this format refer to this source) 
The combination of nanotechnology and advanced AI will make possible intelligent, effective robots; with such robots, a state could prosper while discarding anyone, or even (in principle) everyone. — K. Eric Drexler, Engines of Creation, p. 176

Along with the massive money printing and debt-laden economy our overlords insist we need, there is another economy, so to speak, that defies their intentions.  In the world of technology the Keynesian horror known as price deflation is the overpowering fact.  Far from bringing economic calamity, the accelerating growth of a widening range of technologies is proving resistant to the Keynesian virus of central bank inflation.  And as these technologies merge with our minds and bodies in increasingly diverse and intimate ways, decentralizing and revolutionizing nearly every aspect of our economy and culture, the world as we know it today will disappear during our lifetimes.

There are at least three reasons why today’s world will soon be ancient history:

1. The life force of capitalism (creativity, entrepreneurship, competition, free markets) is still alive, especially in information technology. 
Ray Kurzweil (March 31, 2011 interview): The smartphones we carry around in our pockets are a billion times more powerful — per dollar —  than the computer I used at MIT in the late 1960s.  They're also 100,000 times smaller.  In 25 years our cell phones will be the size of a blood cell and more powerful. (6:10)
2.  Once a technology becomes an information technology it is subject to the Law of Accelerating Returns, meaning it advances exponentially.
Human biology and medicine historically progressed at a linear rate until they were transformed by information technology.  When the government version of the Human Genome Project began in 1990, for example, critics said it would take thousands of years to finish, given the speed at which the genome could then be scanned.  Yet the 15-year project finished slightly ahead of schedule.
3.  The spread of information technology introduces a deflationary effect that expands with advancements in technology.
You could buy one transistor for a dollar in 1968; in 2002 a dollar purchased about ten million transistors. (Kindle 1232)
Despite [the] massive deflation in the cost of information technologies, demand has more than kept up. The number of bits shipped has doubled every 1.1 years, faster than the halving time in cost per bit, which is 1.5 years.  As a result, the semiconductor industry enjoyed 18 percent annual growth in total revenue from 1958 to 2002.  The entire information-technology (IT) industry has grown from 4.2 percent of the gross domestic product in 1977 to 8.2 percent in 1998. (Kindle 1263-1266)
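Kurzweil’s 18 percent figure follows from the two doubling times he cites.  A quick back-of-the-envelope check (my arithmetic, not the book’s):

```python
# Bits shipped double every 1.1 years; cost per bit halves every 1.5 years.
# Revenue = (bits shipped) x (price per bit), so annual growth is:
bits_per_year = 2 ** (1 / 1.1)       # ~1.88x more bits each year
price_per_year = 0.5 ** (1 / 1.5)    # each bit earns ~0.63x as much
growth = bits_per_year * price_per_year
print(f"revenue grows {growth - 1:.0%} per year")   # -> 18%
```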
As Kurzweil has often articulated, exponential growth itself is growing exponentially and applies to a wide range of technologies, from electronic to biological.  Nor does growth depend on a specific paradigm, such as Moore’s Law (shrinking of components on an integrated circuit).  Since the U.S. census of 1890, there have been five paradigms of computing, each one showing exponential growth in price-performance — electromechanical, relays, vacuum tubes, discrete transistors, and integrated circuits.  Each of these paradigms follows an S-curve life cycle — slow growth, followed by explosive growth, ending in a leveling off as it matures.

As a paradigm begins to stall, pressure grows for a replacement paradigm.  Engineers were shrinking vacuum tubes in the 1950s while transistors were making their way into portable radios, and transistors later replaced vacuum tubes in computers.  Moore’s Law will fade around the end of this decade and will be replaced by a sixth paradigm, which will likely be three-dimensional molecular computing.

More specifically, researchers have been experimenting with nanotubes — carbon atoms rolled up into a seamless tube — to replace silicon in computers.  Because they’re very small, nanotubes can achieve very high densities.  Last year, Stanford University engineers built a carbon nanotube computer that was comparable in performance to a 1971 Intel 4004.  Peter Burke of the University of California, Irvine, writing in the peer-reviewed journal Nano Letters, says the theoretical speed limit of nanotube transistors should be one terahertz (1,000 GHz).  Kurzweil estimates that a cubic inch of nanotube circuitry would be up to 100 million times more powerful than the human brain. (Kindle 1893)

Working smarter 

As the computing substrate evolves and becomes orders of magnitude faster, we’re seeing evidence that the software component will lag only slightly behind the hardware advancements.

IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997.  Deep Blue, though, was an expensive supercomputer built for that one purpose.  Less than a decade later Deep Fritz 10, running on a desktop PC with two Intel Core 2 Duo CPUs, accomplished a similar feat by defeating the undisputed world champion Vladimir Kramnik in Bonn, Germany.  More recently, Christophe Théron's Tiger chess engine, which won tournaments from 2000 to 2002, has found a home on Apple’s mobile devices.

Desktop computers and smartphones can't match the raw, chess-specific horsepower of a 1990s machine like Deep Blue.  So how can they be so good at playing chess?

In The Age of Intelligent Machines (p. 130), Kurzweil estimated that it would take about 40 billion years to make the “perfect” move in a typical chess match, assuming a 30-move game in which a computer would analyze 8^30 possible move sequences at the rate of one billion moves per second.  (As Kurzweil notes, the computer would probably blow up with Big Bang II before the first move was determined.)  Rather than attempt the perfect move, human players consider various paths and “prune away” unpromising moves based on pattern-recognition judgments.
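Kurzweil's 40-billion-year figure is easy to verify.  A few lines suffice (assuming, as he does, eight candidate moves per turn and a billion analyses per second):

```python
# 8 plausible moves per turn over a 30-move game, analyzed at 1e9 moves/sec
lines = 8 ** 30                            # ~1.24e27 possible game lines
years = lines / 1e9 / (365.25 * 24 * 3600) # seconds of search, in years
print(f"{years:.1e} years")                # -> ~3.9e+10, about 40 billion
```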

Pruning is how humans approach chess.  It’s also how Deep Fritz plays chess.  Chess software has become more human-like.

The Evolution of Watson

Watson, the IBM supercomputer that defeated Jeopardy!’s two all-time champions in 2011, added a new element to computer evolution with its advanced natural language abilities.  Watson could not only “read” and retain massive amounts of English-language content, it could understand Alex Trebek’s queries (which were often tricky), determine the probability that it knew the correct answer, and decide whether to respond or not.  Watson had to do all this in less than three seconds, on average.  To meet this challenge IBM developed a computer architecture called DeepQA. (QA refers to question answering.) 

Since donating Watson’s million-dollar Jeopardy award to charities, IBM has opened up six Watson Client Experience Centers around the world, with headquarters in a new, glass-walled office building at 21 Astor Place in Manhattan’s East Village.  In partnership with Spain’s CaixaBank, Watson is now learning Spanish, too.

Perhaps we should be addressing it as Dr. Watson.  Watson can read two hundred million pages of clinical data, cross-reference the symptoms of one million cancer patients or read millions of current medical journals to test hypotheses.  It can do any of these tasks in 15 seconds or less.
So, for example, given the query “Which disease causes ‘Uveitis’ in a patient with a family history of arthritis presenting circular rash, fever, and headache?”, a traditional search engine would answer with a set of links to web pages, which a domain expert then has to read through in order to get the relevant information.
If you ask the same question to Watson, the answer would be: 
76% Lyme disease
1% Behcet’s disease
1% Sarcoidosis
And Watson has trimmed down considerably.  The version that starred on Jeopardy! occupied a large, air-conditioned room and was connected to the TV show through an avatar.  Today’s Watson, according to IBM, is 24 times faster, 90 percent smaller, and delivered from the cloud.  Watson has gone from the size of a master bedroom to three stacked pizza boxes.  The latter will seem huge when Watson becomes available on mobile devices.

Can Watson pass for human?

How would Watson technology do in a well-designed Turing test?  Better than most, but it still lacks some of the subtleties most people regard as uniquely human.  Alan Turing designed the test on the grounds that if a machine can think, it can pass for human.  According to Kurzweil,
there are no simple language tricks that would enable a computer to pass a well-designed Turing test. A computer would need to actually master human levels of understanding to pass this threshold.
And when a computer does pass that threshold, Kurzweil will regard it as human.  There will, of course, be controversy over the results.  By the time most people concede that machines can think, Kurzweil contends, they will already be “thousands of times smarter than us.”  Which means a computer's strategy for passing the test will be to dumb itself down, something it will easily be smart enough to do.

Given the exponential price-performance growth of technology, he projects that the hardware to simulate the human brain will be available for $1,000 by 2020.  This assumes a PC capable of operating at 10^16 (ten quadrillion) calculations per second, using dedicated ASIC chips and harvesting the unused computational capacity of the internet.  The software to replicate the brain's functions should take about a decade longer.  He’s betting that by 2029 a computer will pass the Turing test.  “By 2030 it will take a village of human brains (around one thousand) to match a thousand dollars’ worth of computing.”
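The “village” figure is the same doubling math carried forward.  A rough check, assuming Kurzweil’s estimate of about 10^16 calculations per second per brain and, for simplicity, one price-performance doubling per year (his actual curve is somewhat steeper):

```python
# $1,000 buys one brain's worth of computing (1e16 cps) in 2020.
# With annual doublings, the same $1,000 in 2030 buys:
cps_per_brain = 1e16
cps_2030 = cps_per_brain * 2 ** (2030 - 2020)    # ten doublings
print(cps_2030 / cps_per_brain, "brains")        # -> 1024.0, a small village
```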

The engines of revolution

While information technology drives the “strong AI” movement to make computers indistinguishable from humans, it is also engaged in a complementary project: elevating humans far beyond their biological origins.  Kurzweil refers to this as the Genetics-Nanotechnology-Robotics revolution.

Together, these overlapping developments will usher in what he calls the Singularity, “a future period in which technological change will be so rapid and its impact so profound that every aspect of human life will be irreversibly transformed.” [Transcendent Man, the movie]  Kurzweil’s estimated date for the Singularity is 2045 — thirty-one years from now.

So far, most of his many predictions have been either correct or essentially correct.

We are already well underway to the Singularity with the Genetics revolution:
By understanding the information processes underlying life, we are starting to learn to reprogram our biology to achieve the virtual elimination of disease, dramatic expansion of human potential, and radical life extension.  (Kindle 3675-3677)
With an increased understanding of biochemical pathways, researchers are finding ways to control gene expression.  By manipulating peptides (short chains of amino acids), for example, they are finding they can turn off disease-causing genes or turn on helpful genes that may not be expressed in a certain type of cell. 

Yet, reprogramming our biology will never elevate us beyond what Hans Moravec called second-class robots.  
The [nanotechnology] revolution will enable us to redesign and rebuild— molecule by molecule— our bodies and brains and the world with which we interact, going far beyond the limitations of biology.  (Kindle 3679-3681)
Researcher and author Robert A. Freitas, Jr., a pioneering nanotechnology theorist, has designed robotic replacements for human red blood cells called respirocytes that function many times more effectively than their biological counterparts.  A conservative analysis shows if you replaced a portion of your red blood cells with respirocytes, you could do an Olympic sprint for 15 minutes.  Without taking a breath. (Kindle 4675)

Freitas estimates that eliminating 50% of medically preventable conditions would extend human life expectancy to 150 years; eliminating 90% would extend it to 1,000 years or more.

But this is only the beginning.  Kurzweil predicts that over the next two decades 
we will learn how to augment our 100 trillion very slow inter-neuronal connections with high-speed virtual connections via nano-robotics.  This will allow us to greatly boost our pattern-recognition abilities, memories, and overall thinking capacity, as well as to directly interface with powerful forms of computer intelligence. The technology will also provide wireless communication from one brain to another.  
In other words, the age of telepathic communication is almost upon us.
Nanorobots, or nanobots as Kurzweil usually calls them, are robots 100 nanometers or smaller in size, and they will play an important role in our future.  They are programmable, can be introduced through the bloodstream without surgery, and can be directed to leave our bodies.  They can rev up our brainpower or amuse us with virtual reality.

We will be more machine than human, and in another sense, if man is indeed a thinking animal, more human than ever.  
Once our brains are fully online we will be able to download new knowledge and skills. The role of work will be to create knowledge of all kinds, from music and art to math and science. The role of play will also be to create knowledge. In the future, there won’t be a clear distinction between work and play. 
Of the three revolutions the most profound will be robotics, or strong AI.  The Turing test will come and go, computers will begin modifying their software to make themselves smarter, leaving even geeks in the dust, and by the “end of this century, computational or mechanical intelligence will be trillions of trillions of times more powerful than unaided human brain power.”  

We will then infuse the matter and energy of the universe with nonbiological intelligence, causing it to “wake up.”  But this will take a while, unless there's a way to exceed or circumvent the speed of light.

Conclusion

How do we know we will continue expanding at an exponential rate?  

We don’t.  If we sit back and let it happen, it won’t happen.  But given the realities of the world, the trend is just about unshakable.  All the wars of the 20th century, the Great Depression, the Cold War, the recent recession — none of it disrupted the exponential progression of technology.  On a project-by-project level, we obviously can’t make firm predictions, Kurzweil points out.  Fifteen years ago it was clear search engines were coming, but we didn’t know which one would prevail.  Which ideas will win out is not known, either.  But the overall trend has been remarkably predictable.  As Kurzweil writes, “We would have to repeal capitalism and every vestige of economic competition to stop this progression.” (Kindle 1647-1648)

The downsides to technology are well known, and it cannot advance without posing a threat.  There will always be psychopaths, and many of them are enthusiastically voted into political office.  Eric Drexler, the father of modern nanotechnology, issues this warning:
The coming breakthroughs will confront states with new pressures and opportunities, encouraging sharp changes in how states behave.  This naturally gives cause for concern.  States have, historically, excelled at slaughter and oppression. (p. 175)
Fortunately, the exponential favors the individual, especially young people who are eager to embrace new technology.  States, as monuments of bureaucracy and incompetence, are overwhelmed by rapid change.  These considerations, in combination with the decentralizing and deflationary aspects of information technology, may ultimately relegate states and their Keynesian priests to the ash bin of history.  


Tuesday, July 1, 2014

From the Big Bang to the Internet — to the Singularity?

The future will be far more surprising than most observers realize: few have truly internalized the implications of the fact that the rate of change itself is accelerating. — Ray Kurzweil, “The Law of Accelerating Returns”

It’s only one man talking, making projections about the future of technology and not coincidentally the future of the human race.  Yet many of Ray Kurzweil’s predictions have hit the mark.  In 2009, he analyzed 108 of them and found 89 entirely correct and another 13 “essentially” correct.  “Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong,” he added.  If he can maintain this rate of success, many of his other predictions will happen within the lifetime of most people alive today.  And almost no one is prepared for them. 

Author, inventor, successful entrepreneur, futurist, and currently a director of engineering at Google, Kurzweil is enthusiastic about the technology explosion that’s coming.  Here are a few predictions he’s made over the years:

In The Age of Intelligent Machines (1990) he said that by the early 2000s computers would be transcribing speech into computer text, telephone calls would be routinely screened by intelligent answering machines, and classrooms would be dominated by computers.  He also said by 2020 there would be a world government, though I suspect he’s backed off from that view.  (See his comment to Gorbachev in 2005 that technology promotes decentralization which ultimately works against tyranny.) 

In The Age of Spiritual Machines (1999) he predicted that by 2009 most books would be read on screens rather than paper, people would be giving commands to computers by voice, and they would use small wearable computers to monitor body functions and get directions for navigation.  

Some of the milder predictions in The Singularity is Near (2005) include $1,000 computers having the memory space of one human brain (10 TB, or about 10^13 bits) by 2018, the application of nano computers (called nanobots) to medical diagnosis and treatment in the 2020s, and the development of a computer sophisticated enough to pass a stringent version of the Turing test — a computer smart enough to fool a human interrogator into thinking it was human — no later than 2029.

Soon after that, we can expect a rupture of reality called the Singularity.

The Technological Singularity

As used by mathematicians, a singularity denotes “a value that transcends any finite limitation,” such as the value of y in the function y = 1/x.  As x approaches zero, “y exceeds any possible finite limit (approaches infinity).”  Astrophysicists also use the term to refer to the infinite density of a black hole.  

In Artificial Intelligence (AI), the Singularity refers to an impending event generated by entities with greater than human intelligence.  From Kurzweil’s perspective, “the Singularity has many faces. It represents the nearly vertical phase of exponential growth that occurs when the rate is so extreme that technology appears to be expanding at infinite speed. . .  We will become vastly smarter as we merge with our technology.”  

And by “merge” he means (from The Singularity is Near): 
Biology has inherent limitations. For example, every living organism must be built from proteins that are folded from one-dimensional strings of amino acids. Protein-based mechanisms are lacking in strength and speed. We will be able to reengineer all of the organs and systems in our biological bodies and brains to be vastly more capable.
The Singularity, in other words, involves Intelligence Amplification (IA) in humans.  We will, on a voluntary basis, become infused with nanobots: “robots designed at the molecular level, measured in microns.”  Nanobots will have multiple roles within the body, including health maintenance and their ability to vastly extend human intelligence.  
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. In contrast, biological intelligence is effectively of fixed capacity.
As molecular nanotechnology involves the manipulation of matter on atomic or molecular levels, it will be possible to infuse everything on planet earth with nonbiological intelligence.  Potentially, the whole universe could be saturated with intelligence.  

What will the post-Singularity world look like?
Most of the intelligence of our civilization will ultimately be nonbiological. By the end of this century, it will be trillions of trillions of times more powerful than human intelligence. However, to address often-expressed concerns, this does not imply the end of biological intelligence, even if it is thrown from its perch of evolutionary superiority. Even the nonbiological forms will be derived from biological design. Our civilization will remain human— indeed, in many ways it will be more exemplary of what we regard as human than it is today . . . 
The trend tells the story

Kurzweil builds his case on historical trends, as we see in these charts:
[Two charts showing the same progression of evolutionary and technological milestones, one on a linear time scale and one on a logarithmic scale]

Both charts show the same progression, but on different scales.  Life arrives roughly 3.7 billion years ago in the form of biogenic graphite, followed by the appearance of cells two billion years later.  From there biological evolution picks up speed, as does human technology.  Viewing the linear plot, everything seems to happen in one day.  Though the span from the introduction of the personal computer to the World Wide Web was 14 years (from the MITS Altair 8800 in 1975 to Tim Berners-Lee’s proposal in March 1989), it happened almost instantaneously in the overall picture.  The second chart lays it out for us dramatically.
Exponential forces are very seductive, he says.  Until we get far enough along on the curve, they seem linear.  Once we’re past the “knee” the trend starts to become clear.  Or it should.

Mother Jones ran an article a year ago that illustrates how deceptive exponential trends can be.  Imagine if Lake Michigan were drained in 1940, and your task was to fill it by doubling the amount of water you add every 18 months, beginning with one ounce.  So, after 18 months you add two ounces, 18 months later you add four ounces, and so on.  Coincidentally, as you were adding your first ounce to the dry lake, the first programmable computer in fact made its debut.
You continue.  By 1960 you’ve added 150 gallons.  By 1970, 16,000 gallons.  You’re getting nowhere.  Even if you stay with it to 2010, all you can see is a bit of water here and there.  In the 47 18-month periods that have passed since 1940, you’ve added about 140.7 trillion ounces of water.  You’ve done a lot of work but made almost no progress.  You break out a calculator and find that you need 144 quadrillion more ounces to fill the lake.  
You’ll never finish, right?  Wrong.  You keep filling it as you always have, doubling the amount you add every 18 months, and by 2025 the lake is full.
In the first 70 years, almost nothing.  Then 15 years later the job is finished.
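The whole thought experiment fits in a few lines of code.  A sketch, assuming a lake capacity of roughly 1.44 x 10^17 fluid ounces, the figure implied by the article's own numbers:

```python
# Fill a drained Lake Michigan by doubling: 1 oz in 1940, 2 oz 18 months
# later, 4 oz 18 months after that, and so on.
CAPACITY = 1.44e17                   # fluid ounces, implied by the article
total, add, year = 1.0, 1.0, 1940.0  # the first ounce goes in during 1940
while total < CAPACITY:
    add *= 2                         # double the dose each 18-month period
    year += 1.5
    total += add
# By 2010 (47 periods) the running total is 2**47 - 1, ~1.4e14 oz: a puddle.
print(f"lake full in {year:.0f}")    # -> 2024, i.e. full by 2025
```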
Lake Michigan was chosen because its capacity in fluid ounces is roughly equal to the computing power of the human brain measured in calculations per second.  Eighteen months served as the time interval because it corresponds to Moore’s Law (Intel’s David House modified Moore’s 2-year estimate in the 1970s, saying computer performance would double every 18 months.  As of 2003, it was doubling every 20 months.)  As Kurzweil notes,
We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence . . . .
The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though.
Even as AI progresses, the achievements are often discounted.  In The Age of Intelligent Machines (1990) Kurzweil predicted a computer would beat the world chess champion by 1998.  While musing about this prediction in January 2011, he said, “I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess.  [IBM’s] Deep Blue defeated Garry Kasparov in 1997, and indeed we were immediately treated to rationalizations that chess was not really exemplary of human thinking after all.”

What was missing?  The ability to handle the “subtleties and unpredictable complexities of human language.”  Computers could never do this.  These were skills forever unique to humans.

Men in Jeopardy!

Then along came Watson.
The victory of the Watson Supercomputer over two Jeopardy! champions is one small step for IBM, one giant leap for computerkind, [Kurzweil proclaimed].
Watson had a three-day match with the champions in February 2011.  In a warm-up match, one of the categories was rhymes.  The host read the clue to the contestants: “A long tiresome speech given by a frothy pie topping.”  Watson quickly replied, “What is a meringue harangue?”  The humans didn’t get it.



How did Watson acquire such encyclopedic knowledge?  Did IBM engineers hand-feed it information?  No.  Like a person, Watson read voluminously.  Unlike a person, it could take in some 200 million pages of content, including all of Wikipedia.

But there’s more.  According to IBM, “Through repeated use, Watson literally gets smarter by tracking feedback from its users and learning from both successes and failures.” [Emphasis added]  IBM also claims “Watson's servers can handle processing 500 gigabytes of information a second, the equivalent of 1 million books, with its shared computer memory” that totals 8 terabytes.

At Google, Kurzweil’s ambition is to do more than train a computer to read Wikipedia.
We want [computers] to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.
When Kurzweil says “everything on the web,” he means everything — including “every email you've ever written, every document, every idle thought you've ever tapped into a search-engine box.”

Conclusion

Some will find comfort at this point contemplating the beauty and majesty of nature.  Perhaps they will find inspiration in trees.  K. Eric Drexler has been inspired by trees and pays tribute to them in Unbounding the Future:
[Trees] gather solar energy using molecular electronic devices, the photosynthetic reaction centers of chloroplasts. They use that energy to drive molecular machines—active devices with moving parts of precise, molecular structure—which process carbon dioxide and water into oxygen and molecular building blocks. They use other molecular machines to join these molecular building blocks to form roots, trunks, branches, twigs, solar collectors, and more molecular machinery. Every tree makes leaves, and each leaf is more sophisticated than a spacecraft, more finely patterned than the latest chip from Silicon Valley. They do all this without noise, heat, toxic fumes, or human labor, and they consume pollutants as they go. Viewed this way, trees are high technology. Chips and rockets aren't.
Trees give a hint of what molecular nanotechnology will be like.
And molecular technology gives a hint of what our future will be like.

Friday, May 30, 2014

Did gold cause the Great Depression?

The culprit responsible for the Crash and the Great Depression can be easily identified: government.  To protect fractional reserve banking and (later) provide a buyer for its debt, government in 1913 created the Fed, putting it in charge of the money supply.  From about July 1921 to July 1929 the Fed inflated the money supply by 62%, the result being the Crash in late October 1929.  Government, following an aggressive “do something” program for the first time in U.S. history, intervened in numerous ways throughout the 1930s, first under Hoover, then much more heavily under Roosevelt.  The result was not an easing of pain or an acceleration of recovery, but a deepening of the Depression, as Robert Higgs explains in detail.

The preceding is not, of course, the generally accepted explanation.  In conventional discourse, one of the main culprits for causing or at least exacerbating the Depression was the international community’s adherence to the gold standard.  Economist Barry Eichengreen popularized this view, and the Wikipedia entry for Eichengreen includes Ben Bernanke’s summary of Eichengreen’s thesis:
[T]he proximate cause of the world depression was a structurally flawed and poorly managed international gold standard... For a variety of reasons, including among others a desire of the Federal Reserve to curb the US stock market boom, monetary policy in several major countries turned contractionary in the late 1920's—a contraction that was transmitted worldwide by the gold standard.
Why would a monetary policy that turns contractionary be harmful?  Because it puts the fractional reserve house of cards at risk.  When the inflation is exposed and the gold’s not there, bankers do the Jimmy Stewart scramble.  In Bernanke’s words,
What was initially a mild deflationary process began to snowball when the banking and currency crises of 1931 instigated an international "scramble for gold".
The state’s classical gold standard

The classical gold standard that was in operation throughout the West for much of the 19th century was in fact a fiat gold standard, meaning it operated at the pleasure of the state.  When the state was not pleased with its operation, it suspended it, allowing banks to break their promise to redeem paper currency and deposits in gold coin on demand.

But even under the auspices of the state, the classical gold standard kept a lid on inflation.  Gold was money, and the national currencies were names for a certain weight of gold — a dollar was a name for 1/20 of an ounce of gold, for example. A dollar was not a money backed by gold because a dollar was not money.  It was a conditional substitute for the real thing.  The only thing governments and their banks could directly inflate were their currencies, and if they inflated too much they would lose gold through arbitrage opportunities to countries that didn't inflate as much.  In other words, they couldn’t stay on the classical gold standard and print a lot of money.

Robert Murphy provides an example of gold’s check on inflation in his book, The Politically Incorrect Guide to the Great Depression and the New Deal:  
To give an extreme example, if the dollar depreciated to $10 per pound, then owners of gold could make a killing.  They could sell an ounce of gold in England at 4.25 pounds (the legally defined value of the pound).  Then they could enter the currency markets and receive 42.5 American dollars for the 4.25 pounds (because of our assumed $10 exchange rate).  Then they could present the $42.50 in U.S. currency to the United States Treasury and demand the legally defined payment of roughly 2.06 ounces of gold (because one ounce of gold was defined as $20.67).  Thus, absent shipping and other costs, the gold owners would be able to more than double their gold holdings through this arbitrage action. [pp. 93-94]
As gold was transferred to England, British prices would rise. But as the U.S. lost gold to England, it would be pressured to reduce the number of dollars it created, and prices in the U.S. would fall.  Consumers would start buying less from English producers and more from American producers, reversing the trade imbalance, and the exchange rate would gradually fall from $10/pound to $4.86/pound.
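One round trip of Murphy’s arbitrage can be traced step by step.  A sketch using the figures from the quoted passage (the $10-per-pound rate is his hypothetical; the legal definitions imply a par of $20.67/£4.25, or about $4.86):

```python
# Gold-point arbitrage when the market rate diverges from the legal par.
POUNDS_PER_OZ = 4.25    # Bank of England: one ounce of gold = 4.25 pounds
DOLLARS_PER_OZ = 20.67  # U.S. Treasury: one ounce of gold = $20.67
market_rate = 10.0      # hypothetical market rate, dollars per pound

pounds = 1 * POUNDS_PER_OZ          # sell 1 oz of gold in England
dollars = pounds * market_rate      # swap pounds for dollars -> $42.50
gold = dollars / DOLLARS_PER_OZ     # redeem at the Treasury -> ~2.06 oz
print(f"start with 1 oz, end with {gold:.2f} oz")
```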

With the advent of WWI, the belligerent governments ordered their central banks to stop redeeming their currencies in gold.  The gold standard wouldn’t permit a long war — there was not enough gold to pay for one.  By inflating, they not only killed millions of people, they killed the classical gold standard.  Monetary stability was replaced with monetary chaos.

After the war, the inflated money supplies and price levels presented governments with a choice: return to the classical gold standard at lower exchange rates or return to the pars existing before the war.  Britain, in an attempt to re-establish London as the world’s financial center, chose to go back to its old par of $4.86.  Making the old par work would have required a severe deflation, but that was never a serious consideration, nor was it necessary: if governments had been determined to stay on the classical gold standard, they could simply have adopted new post-war parities.

The new gold standard

At the Genoa Conference of 1922 and with the architecture of the monetary order firmly in governments’ hands, representatives from 34 countries met to discuss what to do about money.  The problem was obvious.  Just when governments had needed money the most — to engage in war — gold had let them down.  It had proved exceedingly unpatriotic.  On the other hand, paper money, like the girl from Oklahoma, couldn’t say no.  It saluted whatever plans government devised.  The problem, therefore, wasn’t too much paper — the problem was too little gold.  

Gold’s scarcity was now its fatal flaw.  But the scientists in charge of its fate weren’t ready to announce that the money people had been using for 2500 years had suddenly become dangerous to their economic well-being.  So, they gave it a small support role while displaying its name prominently on the marquee of their new scheme, the gold-exchange standard.   Here was the deal they cut:

1.  The United States would stay on the classical gold standard, meaning people could exchange $20.67 in currency and coin at the Treasury for one gold ounce coin.

2.  Britain would redeem pounds in gold and U.S. dollars, while other nations would pyramid their currencies on pounds.  

3. Britain would only redeem pounds in large gold bars.   Gold was thereby removed from the hands of ordinary citizens, allowing a greater degree of monetary inflation.

4.  Britain also pressured other countries to remain at overvalued parities.

In sum, the U.S. pyramided on gold, Britain on dollars, and other European countries on pounds.  When Britain inflated, other countries inflated on top of their pounds instead of redeeming them for gold.  Britain also induced the U.S. to inflate to keep Britain from losing its stock of dollars and gold to the U.S.

It was an international inflationary arrangement with gold brought along for the ride, to give it the appearance of stability and prestige. When it collapsed, as it was bound to, gold served as the scapegoat.

Gold gets a prison sentence

Keynesian and other monetary scientists claim to have a smoking gun. 

[Chart: industrial production for Japan, Britain, Germany, the U.S., and France, indexed to 1929 = 100]

In this chart, taken from a paper by Barry Eichengreen and reproduced by Robert Murphy, each country's 1929 industrial output is set to 100, and subsequent measurements are expressed relative to that benchmark.

In some chronologies the chart reflects the order in which the countries went off gold: Japan first, then Britain, Germany, the U.S., and finally France.  In Germany and the U.S., industrial output experienced a significant rebound from 1932 to 1933.  But the U.S. didn’t “go off gold” until almost mid-1933, when output was already rising.  Output then mostly flattened before rising again after the dollar was re-tied to gold.

As Murphy notes, whatever the discrepancies in the chart, it allegedly shows the beneficial effects of devaluation playing out over time.  Yet the Depression lasted well beyond 1937, with double-digit unemployment rates persisting throughout the 1930s.

Previous depressions had been over in two to three years without government confiscating people’s gold.  Why did gold suddenly become a major culprit in the 1930s?

And what did it mean “to go off gold”?  It meant any U.S. citizen who didn’t obey FDR’s order to turn in his gold was subject to a $10,000 fine and a 10-year prison sentence — this, for possessing the monetary choice of tens of millions of market participants.  It meant anyone around the world holding dollar-denominated assets, thinking they could redeem them in gold, got stiffed.  Is this how you build confidence in government’s ability to restore prosperity?

Also, is it really surprising economic conditions improved after “going off gold”?  Murphy likens this to an individual homeowner holding a 30-year mortgage on his house declaring he’s “going off his mortgage.”  He says to his mortgage holder, “I’m not paying you anymore.  And I have more guns than you, so tough.”

With no mortgage to pay it’s no surprise that the homeowner’s standard of living rises after “going off his mortgage.” Just because you achieve short-term prosperity by relieving yourself of certain contractual obligations doesn’t prove those obligations were bad.

Conclusion

Government never wants to lose the ability to inflate.  To surrender it would be to surrender sovereignty over money, and it will never do that.  Not willingly.  It’s sometimes said that if people understood what government was doing they would rise up in protest.  But when was the last time you saw average Americans rise up in protest on a national level?  Most Americans are grateful they can pay their bills.  If the digits they use pay the bills, why should they care?

For the same reason people cared in the Great Depression. They had trouble paying their bills.  So they accepted the government’s new currency.

If the Austrian theory of the trade cycle is correct, that is, (quoting Murray Rothbard), if “government depression policy has always … aggravated the very evils it has loudly tried to cure,” then another crisis is in the works.  And when the next one arrives, people will again have trouble paying their bills.  Perhaps it will occur to them that not only are they being cheated and their liberty seriously diminished, they can do something about it. 

Perhaps then books like Rothbard’s What Has Government Done to Our Money?, which Depression-era people did not have, will acquire the attention they deserve.


