Friday, October 24, 2014

Who rules? Information Technology

Natural systems show us only lower bounds to the possible, in cell repair as in everything else.  — K. Eric Drexler, Engines of Creation, p. 105
Our ability to create models— virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips. — Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Kindle Locations 9409-9412; all subsequent references in this format refer to this source) 
The combination of nanotechnology and advanced AI will make possible intelligent, effective robots; with such robots, a state could prosper while discarding anyone, or even (in principle) everyone. — K. Eric Drexler, Engines of Creation, p. 176

Along with the massive money printing and debt-laden economy our overlords insist we need, there is another economy, so to speak, that defies their intentions.  In the world of technology, the Keynesian horror known as price deflation is the overpowering fact.  Far from bringing economic calamity, the accelerating growth of a widening range of technologies is proving resistant to the Keynesian virus of central bank inflation.  And as these technologies merge with our minds and bodies in increasingly diverse and intimate ways, decentralizing and revolutionizing nearly every aspect of our economy and culture, the world as we know it today will disappear within our lifetimes.

There are at least three reasons why today’s world will soon be ancient history:

1. The life force of capitalism (creativity, entrepreneurship, competition, free markets) is still alive, especially in information technology. 
Ray Kurzweil (March 31, 2011 interview): The smartphones we carry around in our pockets are a billion times more powerful — per dollar —  than the computer I used at MIT in the late 1960s.  They're also 100,000 times smaller.  In 25 years our cell phones will be the size of a blood cell and more powerful. (6:10)
2.  Once a technology becomes an information technology it is subject to the Law of Accelerating Returns, meaning it advances exponentially.
Human biology and medicine historically progressed at a linear rate until they were transformed by information technology.  When the government version of the Human Genome Project began in 1990, for example, critics said it would take thousands of years to finish, given the speed at which the genome could then be scanned.  Yet the 15-year project finished slightly ahead of schedule.
3.  The spread of information technology introduces a deflationary effect that expands with advancements in technology.
You could buy one transistor for a dollar in 1968; in 2002 a dollar purchased about ten million transistors. (Kindle 1232)
Despite [the] massive deflation in the cost of information technologies, demand has more than kept up. The number of bits shipped has doubled every 1.1 years, faster than the halving time in cost per bit, which is 1.5 years.  As a result, the semiconductor industry enjoyed 18 percent annual growth in total revenue from 1958 to 2002.  The entire information-technology (IT) industry has grown from 4.2 percent of the gross domestic product in 1977 to 8.2 percent in 1998. (Kindle 1263-1266)
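The 18 percent figure follows directly from the two rates just quoted. A quick arithmetic check (volume and cost are normalized; only the doubling and halving times matter):

```python
# Annual revenue growth when bits shipped double every 1.1 years
# while cost per bit halves every 1.5 years.
bit_growth = 2 ** (1 / 1.1)      # annual multiplier on bits shipped
cost_decline = 2 ** (-1 / 1.5)   # annual multiplier on cost per bit
revenue_growth = bit_growth * cost_decline
print(f"{(revenue_growth - 1) * 100:.0f}%")  # 18%
```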
As Kurzweil has often articulated, exponential growth itself is growing exponentially and applies to a wide range of technologies, from electronic to biological.  Nor does growth depend on a specific paradigm, such as Moore’s Law (shrinking of components on an integrated circuit).  Since the U.S. census of 1890, there have been five paradigms of computing, each one showing exponential growth in price-performance — electromechanical, relays, vacuum tubes, discrete transistors, and integrated circuits.  Each of these paradigms follows an S-curve life cycle — slow growth, followed by explosive growth, ending in a leveling off as it matures.
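The S-curve life cycle each paradigm traces is the familiar logistic curve: it looks exponential until the "knee," then levels off. A minimal sketch (the parameters here are arbitrary, chosen only to show the shape):

```python
import math

def s_curve(t: float, ceiling: float = 1.0, rate: float = 1.0,
            midpoint: float = 0.0) -> float:
    """Logistic function: slow start, explosive middle, mature plateau."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Sample the curve before, at, and after the midpoint.
for t in (-6, -2, 0, 2, 6):
    print(f"t={t:+d}  adoption={s_curve(t):.3f}")
```

Early on (t = -6) growth is barely visible; around the midpoint it is steepest; by t = +6 the paradigm has saturated and pressure builds for a successor.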

As a paradigm begins to stall, pressure grows for a replacement paradigm.  Engineers were shrinking vacuum tubes in the 1950s while transistors were making their way into portable radios, and they later replaced vacuum tubes in computers.  Moore’s Law will fade around the end of this decade and will be replaced by a sixth paradigm, which will likely be three-dimensional molecular computing. 

More specifically, researchers have been experimenting with nanotubes — carbon atoms rolled up into a seamless tube — to replace silicon in computers.  Because they’re very small, nanotubes can achieve very high densities.  Last year, Stanford University engineers built a carbon nanotube computer that was comparable in performance to a 1971 Intel 4004.  Writing in the peer-reviewed journal Nano Letters, Peter Burke of the University of California, Irvine estimates that the theoretical speed limit of nanotube transistors should be one terahertz (1,000 GHz).  Kurzweil estimates that a cubic inch of nanotube circuitry would be up to 100 million times more powerful than the human brain. (Kindle 1893) 

Working smarter 

As the computing substrate evolves and becomes orders of magnitude faster, we’re seeing evidence that the software component will lag only slightly behind the hardware advancements.

IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997.  Deep Blue, though, was an expensive supercomputer built for that one purpose.  Less than a decade later, Deep Fritz 10, running on a desktop PC with two Intel Core 2 Duo CPUs, accomplished a similar feat by defeating the undisputed world champion Vladimir Kramnik in Bonn, Germany.  More recently, Christophe Théron's Tiger chess engine, which won tournaments between 2000 and 2002, has found a home on Apple’s mobile devices. 

Desktop computers and smartphones lack the speed and capacity of 1990s supercomputers.  So how can they be so good at playing chess?  

In The Age of Intelligent Machines (p. 130), Kurzweil estimated that it would take about 40 billion years to make the “perfect” move in a typical chess match, assuming a 30-move game in which a computer would analyze 8^30 possible move sequences at the rate of one billion per second.  (As Kurzweil notes, the computer would probably blow up with Big Bang II before the first move was determined.)  Rather than attempt the perfect move, human players consider various paths and “prune away” unpromising moves based on pattern-recognition judgments.
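The 40-billion-year figure is easy to verify. A quick check of the arithmetic:

```python
# Exhaustive search: 8**30 move sequences evaluated at one billion
# per second, converted to years.
sequences = 8 ** 30                   # about 1.24e27 lines of play
rate = 1e9                            # sequences evaluated per second
seconds_per_year = 365.25 * 24 * 3600
years = sequences / rate / seconds_per_year
print(f"{years:.1e} years")           # ~3.9e10, i.e. about 40 billion
```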

That’s how humans approach chess.  That’s also how Deep Fritz plays chess.  Chess software has become more human-like.

The Evolution of Watson

Watson, the IBM supercomputer that defeated Jeopardy!’s two all-time champions in 2011, added a new element to computer evolution with its advanced natural language abilities.  Watson could not only “read” and retain massive amounts of English-language content, it could understand Alex Trebek’s queries (which were often tricky), determine the probability that it knew the correct answer, and decide whether to respond or not.  Watson had to do all this in less than three seconds, on average.  To meet this challenge IBM developed a computer architecture called DeepQA. (QA refers to question answering.) 

Since donating Watson’s million-dollar Jeopardy award to charities, IBM has opened up six Watson Client Experience Centers around the world, with headquarters in a new, glass-walled office building at 21 Astor Place in Manhattan’s East Village.  In partnership with Spain’s CaixaBank, Watson is now learning Spanish, too.

Perhaps we should be addressing it as Dr. Watson.  Watson can read two hundred million pages of clinical data, cross-reference the symptoms of one million cancer patients or read millions of current medical journals to test hypotheses.  It can do any of these tasks in 15 seconds or less.
So, for example, given the query “Which disease causes ‘Uveitis’ in a patient with a family history of arthritis presenting circular rash, fever, and headache?”, a traditional search engine would answer with a set of links to web pages, which a domain expert would then have to read through to find the relevant information. 
Ask Watson the same question, and the answer would be: 
76% Lyme disease
1% Behcet’s disease
1% Sarcoidosis
And Watson has trimmed down considerably.  The version that starred on Jeopardy! occupied a large, air-conditioned room and was connected to the TV show through an avatar.  Today’s Watson, according to IBM, is 24 times faster, 90 percent smaller, and delivered from the cloud.  Watson has gone from the size of a master bedroom to three stacked pizza boxes.  Even the latter will seem huge when Watson becomes available on mobile devices.

Can Watson pass for human?

How would Watson technology do in a well-designed Turing test?  Better than most, but it still lacks some of the subtleties most people regard as uniquely human.  Alan Turing designed the test on the grounds that if a machine can pass for human, it can think.  According to Kurzweil,
there are no simple language tricks that would enable a computer to pass a well-designed Turing test. A computer would need to actually master human levels of understanding to pass this threshold.
And when a computer does pass that threshold, Kurzweil will regard it as human.  There will, of course, be controversy over the results.  By the time most people concede that machines can think, Kurzweil contends, they will already be “thousands of times smarter than us.”  Which means one strategy for passing the test will be for the machine to dumb itself down, something it will easily be smart enough to do.

Given the exponential price-performance growth of technology, he projects that the hardware to simulate the human brain will be available for $1,000 by 2020.  This assumes a PC capable of operating at 10^16 (ten quadrillion) calculations per second, using dedicated ASIC chips and harvesting the unused computational capacity of the internet.  The software to replicate the brain's functions should take about a decade longer.  He’s betting that by 2029 a computer will pass the Turing test.  “By 2030 it will take a village of human brains (around one thousand) to match a thousand dollars’ worth of computing.”
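The "village" figure follows from the same doubling assumption. A minimal sketch, taking the text's premises that $1,000 buys one brain-equivalent (10^16 calculations per second) in 2020 and that price-performance doubles annually:

```python
# Brain-equivalents that $1,000 buys in 2030, given one
# brain-equivalent in 2020 and annual doubling thereafter.
doublings = 2030 - 2020
brains_per_1000_dollars = 2 ** doublings
print(brains_per_1000_dollars)  # 1024: a "village" of about a thousand
```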

The engines of revolution

While information technology drives the “strong AI” movement to make computers indistinguishable from humans, it is also engaged in a balancing act: elevating humans far beyond their biological origins.  Kurzweil refers to this as the Genetics-Nanotechnology-Robotics (GNR) revolution.  

Together, these overlapping developments will usher in what he calls the Singularity, “a future period in which technological change will be so rapid and its impact so profound that every aspect of human life will be irreversibly transformed.” [Transcendent Man, the movie]  Kurzweil’s estimated date for the Singularity is 2045 — thirty-one years from now.

So far, most of his many predictions have been either correct or essentially correct.

We are already well on our way to the Singularity with the Genetics revolution:
By understanding the information processes underlying life, we are starting to learn to reprogram our biology to achieve the virtual elimination of disease, dramatic expansion of human potential, and radical life extension.  (Kindle 3675-3677)
With an increased understanding of biochemical pathways, researchers are finding ways to control gene expression.  By manipulating peptides (short chains of amino acids), for example, they are finding they can turn off disease-causing genes or turn on helpful genes that may not be expressed in a certain type of cell. 

Yet, reprogramming our biology will never elevate us beyond what Hans Moravec called second-class robots.  
The [nanotechnology] revolution will enable us to redesign and rebuild— molecule by molecule— our bodies and brains and the world with which we interact, going far beyond the limitations of biology.  (Kindle 3679-3681)
Researcher and author Robert A. Freitas, Jr., a pioneering nanotechnology theorist, has designed robotic replacements for human red blood cells called respirocytes that function many times more effectively than their biological counterparts.  A conservative analysis shows if you replaced a portion of your red blood cells with respirocytes, you could do an Olympic sprint for 15 minutes.  Without taking a breath. (Kindle 4675)

Freitas estimates that eliminating 50% of medically preventable conditions would extend human life expectancy to 150 years; eliminating 90% would extend it to 1,000 years or more.

But this is only the beginning.  Kurzweil predicts that over the next two decades 
we will learn how to augment our 100 trillion very slow inter-neuronal connections with high-speed virtual connections via nano-robotics.  This will allow us to greatly boost our pattern-recognition abilities, memories, and overall thinking capacity, as well as to directly interface with powerful forms of computer intelligence. The technology will also provide wireless communication from one brain to another.  
In other words, the age of telepathic communication is almost upon us.
Nanorobots, or nanobots as Kurzweil usually calls them, are robots measuring 100 nanometers or less, and they will play an important role in our future.  They are programmable, can be introduced through the bloodstream without surgery, and can be directed to leave our bodies.  They can rev up our brainpower or amuse us with virtual reality.  

We will be more machine than human, and in another sense, if man is indeed a thinking animal, more human than ever.  
Once our brains are fully online we will be able to download new knowledge and skills. The role of work will be to create knowledge of all kinds, from music and art to math and science. The role of play will also be to create knowledge. In the future, there won’t be a clear distinction between work and play. 
Of the three revolutions, the most profound will be robotics, or strong AI.  The Turing test will come and go, computers will begin modifying their own software to make themselves smarter, leaving even geeks in the dust, and by the “end of this century, computational or mechanical intelligence will be trillions of trillions of times more powerful than unaided human brain power.”  

We will then infuse the matter and energy of the universe with nonbiological intelligence, causing it to “wake up.” But this will take a while, unless there's a way to exceed or circumvent the speed of light.


How do we know we will continue expanding at an exponential rate?  

We don’t.  If we sit back and let it happen, it won’t happen.  But given the realities of the world the trend is just about unshakable.  All the wars of the 20th century, the Great Depression, the Cold War, the recent recession — none of it disrupted the exponential progression of technology.  On a project-by-project level, we obviously can’t make firm predictions, Kurzweil points out.  Fifteen years ago it was clear search engines were coming, but we didn’t know which one would prevail.  Which ideas will win out is not known, either.  But the overall trend has been remarkably predictable.  As Kurzweil writes, “We would have to repeal capitalism and every vestige of economic competition to stop this progression.” (Kindle 1647-1648)

The downsides to technology are well-known, and it cannot advance without posing a threat.  There will always be psychopaths, and many of them are enthusiastically voted into political office.   Eric Drexler, the father of modern nanotechnology, issues this warning:
The coming breakthroughs will confront states with new pressures and opportunities, encouraging sharp changes in how states behave.  This naturally gives cause for concern.  States have, historically, excelled at slaughter and oppression. (p. 175)
Fortunately, the exponential favors the individual, especially young people who are eager to embrace new technology.  States, as monuments of bureaucracy and incompetence, are overwhelmed by rapid change.  These considerations, in combination with the decentralizing and deflationary aspects of information technology, may ultimately relegate states and their Keynesian priests to the ash bin of history.  

Tuesday, July 1, 2014

From the Big Bang to the Internet — to the Singularity?

The future will be far more surprising than most observers realize: few have truly internalized the implications of the fact that the rate of change itself is accelerating. - Ray Kurzweil, “The Law of Accelerating Returns

It’s only one man talking, making projections about the future of technology and not coincidentally the future of the human race.  Yet many of Ray Kurzweil’s predictions have hit the mark.  In 2009, he analyzed 108 of them and found 89 entirely correct and another 13 “essentially” correct.  “Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong,” he added.  If he can maintain this rate of success, many of his other predictions will happen within the lifetime of most people alive today.  And almost no one is prepared for them. 

Author, inventor, successful entrepreneur, futurist, and currently head of Google’s engineering department, Kurzweil is enthusiastic about the technology explosion that’s coming.  Here are a few predictions he’s made over the years:

In The Age of Intelligent Machines (1990) he said that by the early 2000s computers would be transcribing speech into computer text, telephone calls would be routinely screened by intelligent answering machines, and classrooms would be dominated by computers.  He also said by 2020 there would be a world government, though I suspect he’s backed off from that view.  (See his comment to Gorbachev in 2005 that technology promotes decentralization, which ultimately works against tyranny.) 

In The Age of Spiritual Machines (1999) he predicted that by 2009 most books would be read on screens rather than paper, people would be giving commands to computers by voice, and they would use small wearable computers to monitor body functions and get directions for navigation.  

Some of the milder predictions in The Singularity Is Near (2005) include $1,000 computers having the memory space of one human brain (10 TB, or about 10^13 bits) by 2018, the application of nano computers (called nanobots) to medical diagnosis and treatment in the 2020s, and the development of a computer sophisticated enough to pass a stringent version of the Turing test — a computer smart enough to fool a human interrogator into thinking it was human — no later than 2029.  

Soon after that, we can expect a rupture of reality called the Singularity.

The Technological Singularity

As used by mathematicians, a singularity denotes “a value that transcends any finite limitation,” such as the value of y in the function y = 1/x.  As x approaches zero, “y exceeds any possible finite limit (approaches infinity).”  Astrophysicists also use the term to refer to the infinite density of a black hole.  

In Artificial Intelligence (AI), the Singularity refers to an impending event generated by entities with greater than human intelligence.  From Kurzweil’s perspective, “the Singularity has many faces. It represents the nearly vertical phase of exponential growth that occurs when the rate is so extreme that technology appears to be expanding at infinite speed. . .  We will become vastly smarter as we merge with our technology.”  

And by “merge” he means (from The Singularity is Near): 
Biology has inherent limitations. For example, every living organism must be built from proteins that are folded from one-dimensional strings of amino acids. Protein-based mechanisms are lacking in strength and speed. We will be able to reengineer all of the organs and systems in our biological bodies and brains to be vastly more capable.
The Singularity, in other words, involves Intelligence Amplification (IA) in humans.  We will, on a voluntary basis, become infused with nanobots: “robots designed at the molecular level, measured in microns.”  Nanobots will have multiple roles within the body, including health maintenance and their ability to vastly extend human intelligence.  
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. In contrast, biological intelligence is effectively of fixed capacity.
As molecular nanotechnology involves the manipulation of matter on atomic or molecular levels, it will be possible to infuse everything on planet earth with nonbiological intelligence.  Potentially, the whole universe could be saturated with intelligence.  

What will the post-Singularity world look like?
Most of the intelligence of our civilization will ultimately be nonbiological. By the end of this century, it will be trillions of trillions of times more powerful than human intelligence. However, to address often-expressed concerns, this does not imply the end of biological intelligence, even if it is thrown from its perch of evolutionary superiority. Even the nonbiological forms will be derived from biological design. Our civilization will remain human— indeed, in many ways it will be more exemplary of what we regard as human than it is today . . . 
The trend tells the story

Kurzweil builds his case on historical trends, as we see in these charts:

Both charts show the same progression, but on different scales.  Life arrived roughly 3.7 billion years ago in the form of biogenic graphite, followed by the appearance of cells two billion years later.  From there biological evolution picks up speed, as does human technology.  Viewing the linear plot, everything seems to happen in one day.  Though 14 years passed between the introduction of the personal computer (the MITS Altair 8800 in 1975) and the World Wide Web (Tim Berners-Lee’s proposal in March 1989), the interval is almost instantaneous in the overall picture.   The second chart lays it out for us dramatically.
Exponential forces are very seductive, he says.  Until we get far enough along on the curve, they seem linear.  Once we’re past the “knee” the trend starts to become clear.  Or it should.

Mother Jones ran an article a year ago that illustrates how deceptive exponential trends can be.  Imagine if Lake Michigan were drained in 1940, and your task was to fill it by doubling the amount of water you add every 18 months, beginning with one ounce.  So, after 18 months you add two ounces, 18 months later you add four ounces, and so on.  Coincidentally, as you were adding your first ounce to the dry lake, the first programmable computer made its debut.
You continue.  By 1960 you’ve added 150 gallons.  By 1970, 16,000 gallons.  You’re getting nowhere.  Even if you stay with it to 2010, all you can see is a bit of water here and there.  In the 47 18-month periods that have passed since 1940, you’ve added about 140.7 trillion ounces of water.  You’ve done a lot of work but made almost no progress.  You break out a calculator and find that you need 144 quadrillion more ounces to fill the lake.  
You’ll never finish, right?  Wrong.  You keep filling it as you always have, doubling the amount you add every 18 months, and by 2025 the lake is full.
In the first 70 years, almost nothing.  Then 15 years later the job is finished.
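The doubling arithmetic can be checked directly. A minimal sketch, using roughly 1.4 × 10^17 fluid ounces as the capacity implied by the article's figures:

```python
# Fill a drained Lake Michigan starting with 1 ounce in 1940,
# doubling the addition every 18 months.
CAPACITY_OZ = 1.4e17  # approximate capacity implied by the article

total, added, year = 0.0, 1.0, 1940.0
while total < CAPACITY_OZ:
    total += added    # pour in this period's water
    added *= 2        # next period's addition doubles
    year += 1.5       # 18 months pass
print(int(year))      # 2025: seventy years of near-nothing, then done
```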
Lake Michigan was chosen because its capacity in fluid ounces is roughly equal to the computing power of the human brain measured in calculations per second.  Eighteen months served as the time interval because it corresponds to Moore’s Law (Intel’s David House modified Moore’s 2-year estimate in the 1970s, saying computer performance would double every 18 months.  As of 2003, it was doubling every 20 months.)  As Kurzweil notes,
We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence . . . .
The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though.
Even as AI progresses, the achievements are often discounted.  In The Age of Intelligent Machines (1990) Kurzweil predicted a computer would beat the world chess champion by 1998.  While musing about this prediction in January 2011, he said, “I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess.  [IBM’s] Deep Blue defeated Garry Kasparov in 1997, and indeed we were immediately treated to rationalizations that chess was not really exemplary of human thinking after all.”

What was missing?  The ability to handle the “subtleties and unpredictable complexities of human language.”  Computers could never do this.  These were skills forever unique to humans.

Men in Jeopardy!

Then along came Watson.
The victory of the Watson Supercomputer over two Jeopardy! champions is one small step for IBM, one giant leap for computerkind, [Kurzweil proclaimed].
Watson had a three-day match with the champions in February 2011.  In a warm-up match, one of the categories was rhymes.  The host read the clue to the contestants: “A long tiresome speech given by a frothy pie topping.”  Watson quickly replied, “What is a meringue harangue?”  The humans didn’t get it.

How did Watson acquire such encyclopedic knowledge?  Did IBM engineers hand-feed it information?  No.  Like a person, Watson read voluminously.  Unlike a person, it read some 200 million pages of content, including all of Wikipedia.  

But there’s more.  According to IBM, “Through repeated use, Watson literally gets smarter by tracking feedback from its users and learning from both successes and failures.” [Emphasis added]  IBM also claims “Watson's servers can handle processing 500 gigabytes of information a second, the equivalent of 1 million books, with its shared computer memory” that totals 8 terabytes.

At Google, Kurzweil’s ambition is to do more than train a computer to read Wikipedia.
We want [computers] to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.
When Kurzweil says “everything on the web,” he means everything — including “every email you've ever written, every document, every idle thought you've ever tapped into a search-engine box.”


Some will find comfort at this point contemplating the beauty and majesty of nature.  Perhaps they will find inspiration in trees.  K. Eric Drexler has been inspired by trees and pays tribute to them in Unbounding the Future:
[Trees] gather solar energy using molecular electronic devices, the photosynthetic reaction centers of chloroplasts. They use that energy to drive molecular machines—active devices with moving parts of precise, molecular structure—which process carbon dioxide and water into oxygen and molecular building blocks. They use other molecular machines to join these molecular building blocks to form roots, trunks, branches, twigs, solar collectors, and more molecular machinery. Every tree makes leaves, and each leaf is more sophisticated than a spacecraft, more finely patterned than the latest chip from Silicon Valley. They do all this without noise, heat, toxic fumes, or human labor, and they consume pollutants as they go. Viewed this way, trees are high technology. Chips and rockets aren't.
Trees give a hint of what molecular nanotechnology will be like.
And molecular technology gives a hint of what our future will be like.

Friday, May 30, 2014

Did gold cause the Great Depression?

The culprit responsible for the Crash and the Great Depression can be easily identified: government.  To protect fractional reserve banking and (later) provide a buyer for its debt, government in 1913 created the Fed, putting it in charge of the money supply.  From about July 1921 to July 1929 the Fed inflated the money supply by 62%, culminating in the Crash of late October 1929.  Government, following an aggressive “do something” program for the first time in U.S. history, intervened in numerous ways throughout the 1930s, first under Hoover and then much more heavily under Roosevelt. The result was not an easing of pain or an acceleration of recovery, but a deepening of the Depression, as Robert Higgs explains in detail.

The preceding is not, of course, the generally accepted explanation.  In conventional discourse, one of the main culprits for causing or at least exacerbating the Depression was the international community’s adherence to the gold standard.  Economist Barry Eichengreen popularized this view, and the Wikipedia entry for Eichengreen includes Ben Bernanke’s summary of Eichengreen’s thesis:
[T]he proximate cause of the world depression was a structurally flawed and poorly managed international gold standard... For a variety of reasons, including among others a desire of the Federal Reserve to curb the US stock market boom, monetary policy in several major countries turned contractionary in the late 1920's—a contraction that was transmitted worldwide by the gold standard.
Why would a monetary policy that turns contractionary be harmful?  Because it puts the fractional-reserve house of cards at risk.  When the inflation is exposed and the gold’s not there, bankers do the Jimmy Stewart scramble.  In Bernanke’s words,
What was initially a mild deflationary process began to snowball when the banking and currency crises of 1931 instigated an international "scramble for gold".
The state’s classical gold standard

The classical gold standard that was in operation throughout the West for much of the 19th century was in fact a fiat gold standard, meaning it operated at the pleasure of the state.  When the state was not pleased with its operation, it suspended it, allowing banks to break their promise to redeem paper currency and deposits in gold coin on demand.

But even under the auspices of the state, the classical gold standard kept a lid on inflation.  Gold was money, and the national currencies were names for a certain weight of gold — a dollar was a name for 1/20 of an ounce of gold, for example. A dollar was not a money backed by gold because a dollar was not money.  It was a conditional substitute for the real thing.  The only thing governments and their banks could directly inflate were their currencies, and if they inflated too much they would lose gold through arbitrage opportunities to countries that didn't inflate as much.  In other words, they couldn’t stay on the classical gold standard and print a lot of money.

Robert Murphy provides an example of gold’s check on inflation in his book, The Politically Incorrect Guide to the Great Depression and the New Deal:  
To give an extreme example, if the dollar depreciated to $10 per pound, then owners of gold could make a killing.  They could sell an ounce of gold in England at 4.25 pounds (the legally defined value of the pound).  Then they could enter the currency markets and receive 42.5 American dollars for the 4.25 pounds (because of our assumed $10 exchange rate).  Then they could present the $42.50 in U.S. currency to the United States Treasury and demand the legally defined payment of roughly 2.06 ounces of gold (because one ounce of gold was defined as $20.67).  Thus, absent shipping and other costs, the gold owners would be able to more than double their gold holdings through this arbitrage action. [pp. 93-94]
As gold was transferred to England, British prices would rise. But as the U.S. lost gold to England, it would be pressured to reduce the number of dollars it created, and prices in the U.S. would fall.  Consumers would start buying less from English producers and more from American producers, reversing the trade imbalance, and the exchange rate would gradually fall from $10/pound to $4.86/pound.
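Murphy's arbitrage loop can be traced step by step. A short check using the figures from the passage:

```python
# One pass through the arbitrage at the assumed $10/pound rate:
# sell gold in England, swap pounds for dollars, redeem at the Treasury.
POUND_GOLD_PRICE = 4.25    # pounds per ounce (British legal par)
EXCHANGE_RATE = 10.0       # dollars per pound (assumed depreciation)
DOLLAR_GOLD_PRICE = 20.67  # dollars per ounce (U.S. legal par)

pounds = 1 * POUND_GOLD_PRICE         # sell 1 oz of gold in England
dollars = pounds * EXCHANGE_RATE      # £4.25 becomes $42.50
ounces = dollars / DOLLAR_GOLD_PRICE  # redeem dollars for gold
print(f"{ounces:.2f} oz")             # 2.06 oz: the gold more than doubles
```

Each round trip turns one ounce into roughly 2.06, which is why the gold drain forces the inflating country to contract.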

With the advent of WW I the belligerent governments ordered their central banks to stop redeeming their currencies in gold.  The gold standard wouldn’t permit a long war — there was not enough gold to pay for one.  By inflating they not only killed millions of people, they killed the classical gold standard.  Monetary stability was replaced with monetary chaos.

After the war, the inflated money supplies and price levels presented governments with a choice: return to the classical gold standard at new, lower parities or return to the pars existing before the war.  Britain, in an attempt to re-establish London as the world’s financial center, chose to go back to its old par of $4.86.  Making that par honest would have required a severe deflation, but deflation was never a serious consideration, nor was it necessary.  If governments had been determined to stay on the classical gold standard, they could simply have adopted parities reflecting their currencies’ post-war values.

The new gold standard

At the Genoa Conference of 1922, with the architecture of the monetary order firmly in governments’ hands, representatives from 34 countries met to discuss what to do about money.  The problem was obvious.  Just when governments had needed money the most — to engage in war — gold had let them down.  It had proved exceedingly unpatriotic.  On the other hand, paper money, like the girl from Oklahoma, couldn’t say no.  It saluted whatever plans government devised.  The problem, therefore, wasn’t too much paper — the problem was too little gold.  

Gold’s scarcity was now its fatal flaw.  But the scientists in charge of its fate weren’t ready to announce that the money people had been using for 2500 years had suddenly become dangerous to their economic well-being.  So, they gave it a small support role while displaying its name prominently on the marquee of their new scheme, the gold-exchange standard.   Here was the deal they cut:

1.  The United States would stay on the classical gold standard, meaning people could exchange $20.67 in currency and coin at the Treasury for one gold ounce coin.

2.  Britain would redeem pounds in gold and U.S. dollars, while other nations would pyramid their currencies on pounds.  

3. Britain would only redeem pounds in large gold bars.   Gold was thereby removed from the hands of ordinary citizens, allowing a greater degree of monetary inflation.

4.  Britain also pressured other countries to remain at overvalued parities.

In sum, the U.S. pyramided on gold, Britain on dollars, and other European countries on pounds.  When Britain inflated, other countries inflated on top of their pounds instead of redeeming them for gold.  Britain also induced the U.S. to inflate to keep Britain from losing its stock of dollars and gold to the U.S.

It was an international inflationary arrangement with gold brought along for the ride, to give it the appearance of stability and prestige. When it collapsed, as it was bound to, gold served as the scapegoat.

Gold gets a prison sentence

Keynesian and other monetary scientists claim to have a smoking gun. 

In this chart, taken from a paper by Barry Eichengreen and reproduced by Robert Murphy, each country’s industrial output in 1929 is set to 100, and subsequent readings are expressed as percentages of that 1929 benchmark. 
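The normalization behind the chart is simple enough to sketch.  The sample figures below are invented purely for illustration — they are not Eichengreen's data:

```python
# Hypothetical illustration of the chart's method: index a country's
# industrial output so that the 1929 value equals 100. The output
# numbers here are made up for demonstration only.

def index_to_base(series: dict, base_year: int) -> dict:
    """Rescale a year->output series so base_year reads 100."""
    base = series[base_year]
    return {year: 100 * value / base for year, value in series.items()}

fake_output = {1929: 114, 1930: 95, 1931: 80, 1932: 62}  # invented units
print(index_to_base(fake_output, 1929))
```

Expressed this way, every country's series starts at the same point, so the chart compares each country's decline against its own 1929 peak rather than against the others' absolute output.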

In some chronologies the chart reflects the order in which the countries went off gold: Japan first, then Britain, Germany, the U.S., and finally France.  In Germany and the U.S., industrial output rebounded significantly from 1932 to 1933.  But the U.S. didn’t “go off gold” until almost mid-1933, when output was already rising.  Output then mostly flattened before rising again after the dollar was re-tied to gold.

As Murphy notes, whatever the discrepancies in the chart, it allegedly shows the beneficial effects of devaluation playing out over time.  Yet the Depression lasted well beyond 1937, with double-digit unemployment rates persisting throughout the 1930s. 

Previous depressions had been over in 2-3 years without confiscating people’s gold.  Why did it suddenly become a major culprit in the 1930s?

And what did it mean “to go off gold”?  It meant any U.S. citizen who didn’t obey FDR’s order to turn in his gold was subject to a $10,000 fine and a 10-year prison sentence — this, for possessing the monetary choice of tens of millions of market participants.  It meant anyone around the world holding dollar-denominated assets, thinking they could redeem them in gold, got stiffed.  Is this how you build confidence in government’s ability to restore prosperity?

Also, is it really surprising economic conditions improved after “going off gold”?  Murphy likens this to an individual homeowner holding a 30-year mortgage on his house declaring he’s “going off his mortgage.”  He says to his mortgage holder, “I’m not paying you anymore.  And I have more guns than you, so tough.”

With no mortgage to pay it’s no surprise that the homeowner’s standard of living rises after “going off his mortgage.” Just because you achieve short-term prosperity by relieving yourself of certain contractual obligations doesn’t prove those obligations were bad.


Government never wants to lose the ability to inflate.  To surrender it would be to surrender sovereignty over money, and it will never do that.  Not willingly.  It’s sometimes said that if people understood what government was doing they would rise up in protest.  But when was the last time you saw average Americans rise up in protest on a national level?  Most Americans are grateful they can pay their bills.  If the digits they use pay the bills, why should they care?

For the same reason people cared in the Great Depression. They had trouble paying their bills.  So they accepted the government’s new currency.

If the Austrian theory of the trade cycle is correct, that is, (quoting Murray Rothbard), if “government depression policy has always … aggravated the very evils it has loudly tried to cure,” then another crisis is in the works.  And when the next one arrives, people will again have trouble paying their bills.  Perhaps it will then occur to them that not only are they being cheated and their liberty seriously diminished, but that they can do something about it. 

Perhaps then books like Rothbard’s What Has Government Done to Our Money?, which Depression-era people did not have, will acquire the attention they deserve.

Tuesday, May 13, 2014

The Great Enabler

Commentaries about World War I frequently talk about causes and consequences but almost never mention the enablers.  At best, they might mention them approvingly, as if we were fortunate to have had the Fed and the income tax, along with the ingenuity of the Liberty Bond programs, to finance our glorious role in that bloodbath.

Economist Benjamin Anderson’s Economics and the Public Welfare has contributed greatly to our understanding of the period 1914-1946, and it is a book I highly recommend.  Nevertheless, Anderson takes it as a given that the Fed and the income tax had a job to do, and that job was supporting U.S. entry into World War I.  After citing figures purporting to show how relatively restrained bank credit expansion was during the war, he writes:  
We had to finance the Government with its four great Liberty Loans and its short-term borrowing as well. We had to transform our industries from a peace basis to a war basis. We had to raise an army of four million men and send half of them to France. We had to help finance our allies in the war, and above all, to finance the shipment of goods to them from the United States and from a good many neutral countries. [p. 35]
We had to do none of these things.  Only the government made them necessary, and the government was not acting on behalf of its constituents when it formally entered the war in April, 1917.  The U.S. was not under serious threat of attack.  The population at large, Ralph Raico tells us, “acquiesced, as one historian has remarked, out of general boredom with peace, the habit of obedience to its rulers, and a highly unrealistic notion of the consequences of America’s taking up arms.”  [p. 33]  Later on he reports that 
In the first ten days after the war declaration, only 4,355 men enlisted; in the next weeks, the War Department procured only one-sixth of the men required. [p. 40]
Bored with peace they may have been, but it was hardly reflected in the number of volunteers.

Winners and Losers

While the war industries were poised to rake in record profits, Marine Major General Smedley Butler, who was awarded his second Congressional Medal of Honor in 1917, provides details of the fighting men’s share in this bonanza.  For the soldiers, 
it was decided to make them help pay for the war, too. So, we gave them the large salary of $30 a month. 
All they had to do for this munificent sum was to leave their dear ones behind, give up their jobs, lie in swampy trenches, eat canned willy (when they could get it) and kill and kill and kill . . . and be killed. 
But wait!
Half of that wage (just a little more than a riveter in a shipyard or a laborer in a munitions factory safe at home made in a day) was promptly taken from him to support his dependents, so that they would not become a charge upon his community. Then we made him pay what amounted to accident insurance -- something the employer pays for in an enlightened state -- and that cost him $6 a month. He had less than $9 a month left. 
Then, the most crowning insolence of all -- he was virtually blackjacked into paying for his own ammunition, clothing, and food by being made to buy Liberty Bonds. Most soldiers got no money at all on pay days.
We made them buy Liberty Bonds at $100 and then we bought them back -- when they came back from the war and couldn’t find work -- at $84 and $86. And the soldiers bought about $2,000,000,000 worth of these bonds!  
Thomas Woodrow Wilson was a disaster to freedom and free markets.  This "near-great" president awarded domestic government workers inflation compensation in 1917-1918 but omitted the men in the trenches overseas doing the fighting.  Harding and Coolidge were no different, vetoing versions of what became known as the World War Adjusted Compensation Act, which would grant a benefit to veterans.  Congress finally overrode Coolidge’s veto in May, 1924.    
The “bonuses” awarded the veterans were adjusted service certificates that came with a catch — although the men could borrow against them, they couldn’t redeem them until 1945.  (!)  When the Depression deepened in 1932, a so-called Bonus Army of veterans, family members, and friends marched on Washington to demand immediate payment of their promised compensation.  After a clash with police that left two protesters dead, General Douglas MacArthur led an assault with infantry, cavalry, and tanks that drove the Bonus Army out of Washington.

In 1936 the government finally replaced the certificates with Treasury bonds that could be redeemed immediately.  

The Cunning Enabler

One could argue that the existence of states is the true enabler of hell on earth, since only states had entrenched systems of wealth predation and could employ kidnapping (conscription), propaganda, and other means to create a world war. 

But is working for a stateless world a worthwhile use of one’s time?  If two and a half million veterans of the war to end all wars couldn’t get the government to pony up a bonus until 19 years after it had compensated stay-at-home bureaucrats, how could we possibly get rid of government itself?  

Given that states have the power to wipe out all life on the planet, we should at least consider them an alien presence.  That they haven’t reduced the world to ashes already is not a sign of caring and careful leadership.  Combine their monopoly on legal force, nuclear arsenals, a rabid foreign policy, and monumental bureaucratic bungling, along with the steady hum of printing presses and withholding taxes, and you have a formula for turning Mother Earth into a moonscape. 

If we can’t rid the earth of states, we can at least try to disempower them.  Whatever belligerent aspirations U.S. and other world leaders might have, they would be mere pipe-dreams without the wealth-sucking arms of the state.  States that can’t get money for war can’t go to war, or as Pat Buchanan might put it: No money, no war.

And if we had avoided WW I, what might the world look like today?


In a footnote to Rights of Man, Thomas Paine wrote:
It is scarcely possible to touch on any subject, that will not suggest an allusion to some corruption in governments.
Given his proposals for government involvement in our lives, modest though they were, Paine seems to have forgotten his own profound observation.  

We would do well never to forget it.

Tuesday, April 22, 2014

Imagine if we had free prices!

If you were asked how we should go about achieving real economic growth throughout the economy rather than just certain sectors of it, what would you suggest?  Would you revisit the Keynesian toolbox and call for a really, really big stimulus instead of just another really big one?  Would you impose more controls on business, especially the financial sector?  Some people want to revive Glass-Steagall, the gem from the Depression era that was abandoned in 1999 — sound good to you?  How about officially merging the Fed and the Treasury — i.e., turning “monetary policy” over to the government?  Perhaps you’d break out Sheila Bair’s plan to allow each American household to “borrow $10 million from the Fed at zero interest”?  Her proposal was tongue-in-cheek, you say?  Ms. Bair, the former head of the Federal Deposit Insurance Corporation, proposed a plan that in its essentials would be received enthusiastically by those in the know — provided it was confined to special interests.  But if it’s good for some, why not everyone?

“Look out 1 percent, here we come,” Ms. Bair trumpeted. 

Many readers are familiar with the anecdote about a 1681 meeting between French finance minister Jean-Baptiste Colbert and a group of businessmen that included one M. Le Gendre.  Colbert, a mercantilist, was eager for industry to prosper because it would boost tax revenue . . . sort of a fatten-the-goose approach to economics.  When he asked how their government could be of service to the business community, Le Gendre famously replied, "Laissez-nous faire” — “Let us be.”

What?  Tell government to get out of the way?  Who today would even joke about such a proposal?  After all the lessons we’ve learned about markets — that they’re ultimately governed by mysterious “animal spirits” that can only be counterbalanced with deft fiscal policy, that market predators would run riot over the innocent if not restrained, that we need a central bank to keep prices from falling, to ensure the big players don’t go under, and to protect Uncle’s bond market —  keeping government out of the picture is the one proposal that is off the table.

It’s also the reason we’re on course for more and bigger crises.  

Free prices would mean falling prices

Hunter Lewis is today’s M. Le Gendre.  His message is found in the title of his most recent book: Free Prices Now! Fixing the Economy by Abolishing the Fed.  He presents an iron-clad case.

Why the focus on prices?  Why not markets?  
The most reliable barometer of economic honesty is to be found in prices. Honest prices, neither manipulated nor controlled, provide investors and consumers with reliable economic signals.  They show, beyond any doubt, what is scarce, what is plentiful, where opportunities lie, and where they do not lie.
In a profit-driven economy with open competition, he points out, the quest for profits will increase supply and drive down costs, which will lower consumer prices.  Lower prices help the poor especially.

While this should be economic common sense, it isn’t.  The world is dominated by debt-soaked Keynesians, who abhor falling prices.  Central banks and governments correctly regard inflation as their savior, as long as it doesn’t get out of hand.   Success for central bankers is a gently climbing “price level,” a term that defies clear understanding.  They’re not bothered in the least by the dollar’s loss of 97% of its purchasing power since the Fed opened its doors a century ago. 
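That 97% figure is worth unpacking.  As a back-of-the-envelope sketch — assuming, for simplicity, a smooth geometric average over the century, which is my simplification and not a claim about any particular year — it implies the following:

```python
# What a 97% loss of purchasing power over roughly a century implies
# as an average annual inflation rate. The 97% and 100-year figures
# come from the passage above; smooth compounding is an assumption.

YEARS = 100
REMAINING_PURCHASING_POWER = 0.03  # 97% lost

price_level_multiple = 1 / REMAINING_PURCHASING_POWER        # ~33x
annual_inflation = price_level_multiple ** (1 / YEARS) - 1   # geometric mean

print(f"prices rose roughly {price_level_multiple:.0f}-fold")      # 33-fold
print(f"implied average annual inflation: {annual_inflation:.1%}")  # 3.6%
```

A mere 3.6% a year sounds tame, which is exactly the point: a "gently climbing price level" compounds into a 33-fold price increase within a single century.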

Under a free price system, falling prices are the payoff for successful productivity, Lewis tells us.  During the last decades of the 19th century in the US, prices trended downward while the economy experienced explosive growth along with a rapidly increasing population.  Murray Rothbard notes that “from 1879–1889, while prices kept falling, wages rose 23 percent.” [p. 165]

Nor was this falling prices/growth relationship a historical anomaly.  In a paper published by the Minneapolis Fed in 2004, researchers examined 17 countries in five-year episodes for over 100 years and found “virtually no evidence” of a link between deflation and depression.  They conclude that 
A broad historical look finds many more periods of deflation with reasonable growth than with depression and many more periods of depression with inflation than with deflation.  [emphasis added]
While the Great Depression (1929-1934) is a hotly debated period and does show a link between deflation and depression, it is “not an overwhelmingly tight link.”

Inflated riches versus earned riches

So, why does the Fed continue to inflate?  Inflation has beneficiaries, at least in the short-to-medium run.  Think of the advantages accruing to a counterfeiter.  Not only does the Fed finance the federal deficit, but most of the new money it prints passes through Wall Street first, fattening financial firms’ profits.  

In Chapter 14, Lewis explains how falling prices spread throughout an economy.  He cites the career of Andrew Carnegie, who became the richest man in America by selling steel at ever lower prices.  As he sold to other companies, they too lowered their costs and sold more cheaply.  Consequently, more railroads could be built, and the cost of shipping freight and travel fell.  The cost of drilling wells declined.  Oil and gasoline became cheaper.

But there is widespread fear among mainstream economists and Fed officials that deflation (falling prices) caused the Great Depression.  Does it occur to them that they might have it backwards — that the Depression caused falling prices because certain prices had been elevated by Fed money printing?

In the depression of 1920-21 prices fell precipitously and unemployment soared.  The Fed raised interest rates and the Harding administration slashed spending, policies that are unthinkable in today’s Keynesian climate.  Yet by 1923 unemployment was only 2.4%, and the depression was history.

The Benjamin Strong Fed inflated during the 1920s, creating a bubble economy.  When in 1927 it started to falter he decided to “give a little coup de whiskey to the stock market.”  As a loyal player in the Morgan ambit, Strong inflated to help maintain Britain’s fragile monetary structure.

When the Crash came, first Hoover then Roosevelt acted to keep wages from falling.  Without the ability to reduce wages many companies faced bankruptcy, so they laid off workers on a massive scale.  Those employees who kept their jobs got the equivalent of enormous raises because of frozen wages and falling prices.

After WW II many economists expected the economy to fall back into depression because price and wage controls were ending, and government spending and taxes were being reduced. But even with the return of 10 million veterans, unemployment remained below 5%.  A recession in 1949 raised it to 6% but even this figure was far below anything achieved during the Keynesian heyday of the Great Depression.

Lewis poses the question: What should government do when facing a bust of its own making?  Remember the Hippocratic Oath: First do no harm.  Stop manipulating and controlling prices.  Allow prices the freedom to adjust.  Allow the patient to recover.

Not all prices and wages are out of adjustment.  And of those that are, not all need to fall.  Some should rise.  The economy is not a water tank to be filled or drained until the right level is reached.  Recessions are not punishment, as Keynesians often claim.  They are a way of removing economic wreckage and debris so the economy can move forward.

Fractional reserve banking, the cause of the boom-bust cycle, is both fraudulent and economically unsound, Lewis points out.  It’s fraudulent because it promises to pay depositors on demand with money it has lent elsewhere.  It is unsound because pyramiding loans with new money causes instability. 

Sever all connections between government and money
What kind of money and banking system do we need?  Here, Lewis quotes Lew Rockwell:
F. A. Hayek discusses the only serious means of reform that is open to us. We must completely abolish the central bank. Money itself must be wholly untied from the state. It must be restored as a private good, privately produced for private markets. Government must have no role at all in monetary affairs. Money should be produced by private enterprise alone. Banks must exist only as free-enterprise institutions, with no privileges from the state. . . . 
Let failing banks die. Let profitable banks live. Let the people choose to use any form of money. Let the people choose any means of payment. Let entrepreneurs create any form of financial instrument. Law applies only the way it applies to all other human affairs: punishing force and fraud. Otherwise, the law should have nothing to do with it.
Lewis tells us: “The Fed is no devil.  It is doubtless staffed by many sincere people who have no inkling of the moral and financial devastation they are wreaking.”

It may be true that the Fed is no devil.  But if it were a devil, it could hardly do worse than it has.  If we can’t get the Fed shut down, Lewis adds, we should at least fight for open currency competition.  If people were free to use a different money, they almost certainly would.  With its involuntary customer base depleted, the Fed would either sell its printing presses or close its doors.

“Prices,” Lewis concludes, “should be fully emancipated from government.”

I would argue that everything should be completely emancipated from government, but that’s a topic for another day.  Hunter Lewis’s book is straightforward and compelling.  Get it and absorb it.