Tag Archives: Finance

Is Bitcoin a viable currency?

The media and blogosphere have been full of Bitcoin discussions recently and almost everyone has an opinion, but most of these opinions are tied to the technology of Bitcoin, that is, whether this new currency represents a major technological revolution in money.  So, most commentary has focused on questions about Bitcoin’s technological advantages: Is it really secure?  Is it truly anonymous?  Can it be counterfeited?  Are transaction costs actually lower?  Here, here and here are a few examples, and they contain comments like “Bitcoin is the first practical solution to a longstanding problem in computer science called the Byzantine Generals Problem.”  That is, they focus on the technology of Bitcoin.

But what of the finance and economics of Bitcoin?  Does it have the economic properties to be a viable currency?  I don’t think so.

Good money has three economic properties and uses.  It is a unit of account, used to measure and write contracts for things like income, wealth, and prices of goods.  It is a means of payment, used to avoid barter.  And it is a store of value, held to be able to make transactions in the future.  Of these three properties, the third is the most important.  Unless money has a stable value, it does not serve the purposes that it should.  People will be wary of accepting something that might lose lots of value, and something with a volatile price makes a bad unit of account.

And my argument is not just that Bitcoin has had wild fluctuations in value that undermine its role as a viable currency, but deeper, that Bitcoin is destined to have wild fluctuations – it is poorly designed and conceived and so is likely to fail as a currency.  Why?

First, and primarily, Bitcoin lacks a mechanism for setting the supply of Bitcoin equal to the demand for Bitcoin in order to maintain its value.  History is replete with examples of governments that tied their hands in the supply of their currencies, much like Bitcoin has done.  What happens?  The value of the currency fluctuates.  Often a lot.  Before the founding of the Fed in the US, the dollar was backed by gold: gold discoveries led to inflations, and collapses in the price of gold led to recessions and even financial crises.  Since the end of the Great Depression in the US, the Fed has actively managed the money supply to achieve price stability (at some times better than others).

Consider the example of the Y2K scare.  Before January 1, 2000, people were concerned that the change from the year 1999 to the year 2000 could lead to serious errors in computer systems, and in particular that it might become hard to use credit cards or get money out of a bank (or worse, bank deposits might even get lost).  As a result, people withdrew cash before New Year’s, lots of it.  (These types of cautionary actions were widespread: governments grounded all airline flights overnight.)  These withdrawals were increased demand for cash that might have driven up the price of dollars – i.e., led to deflation and changed interest rates.  But the large increase in the demand for cash did not cause any such real economic effects.  Why?  Because when demand increased, the Fed simply expanded the amount of currency in circulation.  When New Year’s came and went without serious incident, people re-deposited their cash and the Fed reduced the money supply.  The US price level remained stable.

Similar examples abound.  Prior to the founding of the Fed, the seasonal agricultural cycle led to big seasonal swings in the demand for credit and currency, which led to seasonal swings in nominal interest rates (that is, the usual interest rate we think of, which is the real interest rate plus changes in the value of money, that is, plus inflation).  If Bitcoin gains traction, will it have seasonal fluctuations in its value that track the seasonal spending patterns of the world?  Will Bitcoins be more valuable in early December and comparably cheap in January?

Every day, central banks supply their currencies in proportion to the needs of the users of their currencies, so as to maintain a stable value for their currencies.  Bitcoin does not have a central bank.  It has a relatively inflexible supply mechanism (known as Bitcoin mining).  As a result, Bitcoin is destined not to have a stable value.  And a volatile price is bad for Bitcoin’s usefulness as a currency.  Central banks are an enormous competitive advantage for traditional currencies that the Bitcoin supply process completely lacks.

A second problem with maintaining a stable value is that digital currency is not really in limited supply.  Its proponents will argue that it is.  The Bitcoin technology is carefully, maybe even brilliantly, designed to ensure that the supply grows slowly and is ultimately limited.  But what happens when Bitcoin 2.0 comes out?  What if it has slightly better properties than the old technology?  Do people stop using Bitcoin 1.0 entirely, leading it to become worthless?  Probably.  Is such a scenario likely?  Well, think about the potential profits that one could make introducing Bitcoin 2.0, just by keeping a share of the initial number of coins.  These potential profits provide an incentive for the high-tech business that comes up with a better Bitcoin to take over the digital currency market through advertising, lobbying, payments to businesses and so forth.  Or consider this alternative scenario.  Global banks start to provide currency transfers within their institutions but across borders that are as safe, rapid, and low-cost as Bitcoin payments.  There is no technological advantage to Bitcoin relative to a global bank with branches in many countries.  The point: while Bitcoin is in limited supply, digital currencies are not, and neither are inexpensive ways to transfer money and make payments.

There are several other important cards stacked against Bitcoin, but I will conclude with only one more.  The “money supply” in every country in the world is actually hard currency times the money multiplier – the ramping up of the hard currency into deposits in banks and lines of credit and gift cards and so forth.  In the US, the money supply – counting all of these money-like assets – is about twenty times the supply of hard currency.  And Bitcoin banking is developing and could go one of two ways.  First, it could be significantly private and unregulated.  The history of unregulated banking is a disaster, full of bank runs, volatile price levels, currency collapses and so on.  The banking sector’s volatility becomes the volatility of the supply of Bitcoins, which becomes price volatility.  Consider how recently the collapse of a single Bitcoin exchange affected the price of Bitcoins.  The second way Bitcoin banking could go would be as a regulated banking sector, becoming part of the traditional banking sector.  But then several claimed benefits of Bitcoin go out the window.  The true, large supply of Bitcoin would be governed by banking regulation (and in every country in the world – what a mess!).  And while a Bitcoin is anonymous, a Bitcoin deposit is not anonymous.  Once a bank gives you a credit for a Bitcoin and knows who you are, can it see in the Bitcoin chain how it was spent?  Not sure, but I would worry about it.
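The money-multiplier arithmetic above is simple enough to sketch in a few lines.  The roughly twenty-fold multiplier comes from the discussion; the hard-currency figure below is a hypothetical round number, not official data:

```python
def broad_money(hard_currency: float, multiplier: float) -> float:
    """Broad money: hard currency ramped up through deposits, credit lines, etc."""
    return hard_currency * multiplier

# The ~20x multiplier is from the discussion above; the $1 trillion of
# hard currency is a hypothetical round number for illustration only.
print(broad_money(1e12, 20))  # 2e+13, i.e., ~$20 trillion of money-like assets
```

The point of the sketch: whatever sets the supply of hard currency (a central bank, or Bitcoin mining), the effective money supply is that base scaled by a banking-sector multiplier, so volatility in the multiplier becomes volatility in the money supply.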

In sum, I am not worried about the technology – I have complete confidence that people at the other end of the MIT campus can solve almost all of the technological problems.  But the finance is suspect.  I am guessing that Bitcoin either remains small and volatile, with only transactions of suspect legality willing to accept the volatility as the price of true anonymity, or that Bitcoin goes down in history as a bubble, ultimately as worthless as the sequence of zeroes and ones that make up each coin.

What is the true cost of government-backed credit?

The U.S. government is arguably the largest financial institution in the world. If you add the outstanding stock of government loans, loan guarantees, pension insurance, deposit insurance and the guarantees made by federal entities such as Fannie Mae and Freddie Mac, you get to about $18 trillion of government-backed credit. Through those activities, the government has a first-order effect on the allocation of capital and risk in the economy.

Professor Deborah Lucas

The question of what those commitments cost the public is important; accurate cost assessments are necessary for informed decisions by policymakers, effective program management, and meaningful public oversight.  My research and that of others have shown that if one takes a financial economics approach to answering that question — one that is consistent with the methods used by private financial institutions to evaluate such costs — it leads to significantly higher estimates than the approach currently used by the federal government.

At the core of the problem are the rules for government accounting, which by law require that costs for most federal credit programs be estimated using a government borrowing rate for discounting expected cash flows, regardless of the riskiness of those cash flows. That practice systematically understates the cost to the government because it neglects the full cost of risk to taxpayers, who are effectively equity holders in the government’s risky loans and guarantees.

An alternative approach to cost estimation — a fair value approach based on market prices — would fully take into account the cost of risk. Fully accounting for the cost of risk makes a significant difference:  An estimate of the official budgetary cost of credit programs in 2013 shows them as generating savings for the government of $45 billion, whereas a fair value estimate suggests the programs will cost the government about $12 billion.
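The gap between the two estimates comes entirely from the discount rate applied to the same expected cash flows.  A minimal sketch with a hypothetical loan program: the ~3% government borrowing rate and ~7% risk-adjusted rate below are illustrative assumptions, not official figures:

```python
def npv(cash_flows, rate):
    """Present value of expected annual cash flows at a constant discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical program: lend 100 today, expect 9 back per year for 15 years
# (expected values, i.e., already net of average defaults).
expected = [9.0] * 15
official = npv(expected, 0.03) - 100  # government borrowing rate: ignores risk
fair = npv(expected, 0.07) - 100      # risk-adjusted (fair value) market rate

# The same loan shows a budgetary "savings" under the official rule
# but a cost at fair value.
print(f"official: {official:+.1f}, fair value: {fair:+.1f}")
```

This is the mechanism behind the $45 billion savings versus $12 billion cost discrepancy: identical cash flows, different prices of risk.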

The understatement of cost has important practical consequences. For example, it may favor expanding student loans over Pell grants because student loans appear to make money for the government. It also creates the opportunity for “budgetary arbitrage,” whereby the government can buy loans at market prices and book a profit that reduces the reported budget deficit, as it did in several instances during the recent financial crisis.

That perspective on how credit program costs should be measured is widely shared by financial economists, although until recently the issue has not received much attention by academics. That changed last month when the Financial Economists Roundtable (FER), of which I am a member, issued a statement on this matter, writing: “The apparent cost advantage of government credit assistance over private lenders is, in the opinion of the FER, primarily due to [government] accounting rules, rather than to any inherent economic advantage of the government.”

According to the FER’s statement, the solution to this undervaluation is to amend current accounting rules to require an approach to cost estimation that fully recognizes the cost of risk in the government’s credit programs. The group maintains (and I agree) that such a change “would make the true budgetary implications of credit assistance more transparent to program administrators, policy makers and the public.”

Prof. Deborah Lucas is the author of “Valuation of Government Policies and Projects.” She previously served as assistant director and chief economist at the Congressional Budget Office. 

Moore’s Law, Murphy’s Law, and the Financial System 2.0

Gordon Moore is one of the great visionaries of our time.  In 1965—three years before he co-founded Intel, now the largest semiconductor chip manufacturer in the world—Moore published an article in Electronics Magazine where he observed that the number of transistors that could be placed onto a chip seemed to double every year.  This simple observation—an empirical formula implying a constant rate of growth—led Moore to extrapolate an increase in computing potential from sixty transistors per chip in 1965 to sixty thousand in 1975, a number that seemed absurd at the time but which was realized on schedule a decade later.  With some revisions, “Moore’s Law” has been a remarkably prescient forecast of the growth of the semiconductor industry over the last 40 years.
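Moore's 1965 extrapolation is just compounding at a fixed doubling time, and a one-line formula reproduces his numbers:

```python
def transistors(year: int, base_year: int = 1965, base_count: int = 60,
                doubling_years: float = 1.0) -> float:
    """Transistor count per chip under a constant doubling time
    (Moore's original 1965 extrapolation: 60 transistors, doubling yearly)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(transistors(1975)))  # 61440: roughly the "sixty thousand" Moore predicted
```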

But Moore’s Law has come to mean much more than just a measure of progress in chip design and fabrication.  It’s become a cultural icon of the Information Age that represents the very core of what drives modern society: the exponential growth of technology. Thanks to breakthroughs in agricultural, medical, manufacturing, transportation, and information technologies, we’ve managed to increase the population of Homo sapiens on this planet from about 1.5 billion in 1900 to nearly 7 billion today.  This more-than-quadrupling of our numbers refutes the dire predictions of the 18th century economist Thomas Malthus, who reasoned that our species was doomed because populations grow exponentially while food supplies grow linearly.  Apparently, agriculture has a Moore’s Law of its own.

Moore’s Law now affects a broad spectrum of modern life. It influences everything from household appliances to biomedicine to national defense, but its impact has been especially strong in the financial system.  As computing has become faster, cheaper, and better at automating complex tasks, financial institutions have been able to increase the scale of their activities proportionally.  At the same time, simple population growth has increased the demand for financial services.  After all, most individuals are born into this world without savings, income, housing, food, education, or employment; all of these necessities require financial transactions of one sort or another.  It shouldn’t come as a surprise, then, that Moore’s Law also applies to the financial system.  From 1929 to 2009 the total market capitalization of the U.S. stock market has doubled every decade.  The total trading volume of stocks in the Dow Jones Industrial Average doubled every 7.5 years during this period, but in the most recent decade, the pace has accelerated: now the doubling occurs every 2.9 years, growing almost as fast as the semiconductor industry.
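The doubling times quoted above translate into annual growth rates via r = 2^(1/T) − 1, where T is the doubling period in years.  The doubling periods below come from the text:

```python
def annual_rate(doubling_years: float) -> float:
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

# Doubling periods quoted in the discussion above.
for label, t in [("market cap (10 yrs)", 10.0),
                 ("trading volume, 1929-2009 (7.5 yrs)", 7.5),
                 ("trading volume, recent (2.9 yrs)", 2.9)]:
    print(f"{label}: {annual_rate(t):.1%} per year")
```

A 2.9-year doubling time works out to roughly 27% annual growth, which is why recent trading volume is described as growing almost as fast as the semiconductor industry itself.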

But the financial industry differs from the semiconductor industry in at least one important respect: human behavior plays a more significant role in finance. As the great physicist Richard Feynman once said, “Imagine how much harder physics would be if electrons had feelings.”  While financial technology undoubtedly benefits from Moore’s Law, it must also contend with Murphy’s Law, “whatever can go wrong will go wrong”, as well as its technology-specific corollary, “whatever can go wrong will go wrong faster and bigger when computers are involved.”

The experience of Knight Capital Group is a case in point.  As one of the largest and most technologically advanced firms in the United States, Knight was responsible for $20 billion of trades on the New York Stock Exchange each day, about a sixth of the exchange’s total daily trading volume.  Much of Knight’s trading was handled entirely electronically; Knight’s success and growth as a broker/dealer was a direct consequence of Moore’s Law.

But it was Murphy’s Law, not Moore’s Law, that governed the events of August 1, 2012.  On that fateful day, Knight’s electronic trading system sent out millions of incorrect orders across some 150 stocks for forty-five minutes after the opening bell.  In less than an hour, Knight’s system generated losses of about $440 million—approximately $10 million per minute—which far exceeded the $365 million of cash Knight had on hand at the time.

If this were an isolated incident, it would be unremarkable.  Software errors occur all the time in every industry—remember Y2K?—but no one would have guessed that as technologically sophisticated a firm as Knight would be the one to get hit with a major software failure.  Over the past several years, the number of technology-related problems in the financial industry seems to be growing in frequency and severity.  More worrisome is the fact that these glitches are affecting parts of the industry that previously had little to do with technology, such as initial public offerings (IPOs).  IPOs have been a staple of modern capitalism since the launch of the Dutch East India Company in 1602, and there is evidence of publicly traded firms going back to the Roman Republic in the second century B.C.   How could software errors possibly affect such a basic and well-understood financial transaction?

On Friday, May 18, 2012, the social networking pioneer Facebook had the most highly anticipated IPO in recent financial history.  With over $18 billion in projected sales, Facebook could easily have listed on the New York Stock Exchange along with the “big boys” like Exxon and General Electric, so Facebook’s choice to list on NASDAQ instead was quite a coup for the newer market.  The combination of Facebook’s computing prowess and NASDAQ’s technology focus seemed tailor-made for each other, a financial fashion statement in keeping with Facebook CEO Mark Zuckerberg’s hoodie hacker persona.

Facebook’s debut was less impressive than most investors had hoped, but its lackluster price performance was overshadowed by a more disquieting technological problem with its opening.  Another unforeseen glitch, this time in NASDAQ’s IPO system, interacted unexpectedly with trading behavior to delay Facebook’s opening by thirty minutes, an eternity in today’s high-frequency environment.  This was a beautiful, if unfortunate and costly, illustration of Murphy’s Law at work.  As the hottest IPO of the last decade, Facebook’s opening attracted extraordinary interest from investors, while NASDAQ prided itself on its ability to handle high volumes of trades.  NASDAQ’s IPO Cross software was reportedly able to compute an opening price from a stock’s initial bids and offers in less than forty microseconds (a human eyeblink lasts eight thousand times as long).  However, on the morning of May 18, 2012, interest in Facebook was so heavy that it took NASDAQ’s computers up to five milliseconds to calculate its opening trade, a hundred times longer than usual.  During this calculation, NASDAQ’s order system allowed investors to change their orders up to the print of the opening trade on the tape.  But these few extra milliseconds before the print were more than enough for new orders and cancellations to enter NASDAQ’s auction book.  These new changes caused NASDAQ’s IPO software to recalculate the opening trade, during which time even more orders and cancellations entered its book, compounding the problem in an endless loop.  As the delay continued, even more traders cancelled their previous orders, “in between the raindrops,” as NASDAQ’s CEO Robert Greifeld rather poetically explained.

This glitch created something software engineers call a “race condition,” in this case a race between new orders and the print of the opening trade, an infinite loop which required manual intervention to exit, something that hundreds of hours of testing had missed.  By the time the system was reset, NASDAQ’s programs were running nineteen minutes behind real time.  Seventy-five million shares changed hands during Facebook’s opening auction, a staggering number, but orders totaling an additional thirty million shares took place during this nineteen-minute limbo. This incredible gaffe, which some estimates say cost traders $100 million, eclipsed NASDAQ’s considerable technical achievements in handling Facebook’s IPO.
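The feedback loop described above is a textbook race condition.  Here is a toy sketch—hypothetical names and numbers, not NASDAQ's actual system—of how a recompute-on-change rule fails to converge when changes keep arriving faster than the computation finishes, and why manual intervention was needed to break out:

```python
import itertools

def opening_cross(book, incoming_orders, max_rounds=1000):
    """Toy model of the race: compute the opening cross on a snapshot of the
    order book, but if new orders/cancels arrive mid-computation, the snapshot
    is stale and must be recomputed."""
    for rounds in itertools.count(1):
        snapshot = list(book)                  # start computing on a snapshot
        arrived = next(incoming_orders, None)  # orders keep arriving mid-computation
        if arrived is not None:
            book.append(arrived)               # the book changed while computing...
            if rounds >= max_rounds:           # manual intervention: stop recomputing
                return snapshot, rounds
            continue                           # ...so the snapshot is stale: redo it
        return snapshot, rounds                # book stable: the cross can print

# A book that receives a new order on every recomputation never converges
# without the manual cutoff:
orders = iter(range(5000))
_, rounds = opening_cross([], orders, max_rounds=100)
print(rounds)  # hits the manual cutoff at 100
```

With no arrival rate limit, the loop's exit condition ("no changes during the computation") can simply never become true—exactly the limbo NASDAQ's engineers had to break by hand.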

Less than two months before, another IPO suffered an even more shocking fate.  BATS Global Markets, founded in 2005 as a “Better Alternative Trading System” to NASDAQ and the New York Stock Exchange, held its IPO on March 23, 2012.  BATS operates the third largest stock exchange in the United States; its two electronic markets account for 11% to 12% of all U.S. equity trading volume each day.  BATS was to stock exchanges what Knight Capital was to broker/dealers: among the most technologically advanced firms in its peer group and the envy of the industry.  Quite naturally, BATS decided to list its IPO on its own exchange.  If an organization ever had sufficient “skin in the game” to get it right, it was BATS, and if there were ever a time when getting it right really mattered, it was on March 23.  So when BATS launched its own IPO at an opening price of $15.25, no one expected its price to plunge to less than a tenth of a penny in a second and a half due to a software bug affecting stocks whose ticker symbols began with the letters A and B. (Apple was also affected, but only lost 9.4% over a five-minute interval.)  The ensuing confusion was so great that BATS suspended trading in its own stock, and ultimately cancelled its IPO altogether.

In addition to being fine illustrations of Murphy’s Law in action, these financial disasters are symptoms of a much broader challenge facing modern society: the growing complexity of adaptive systems.  In 1984 the sociologist Charles Perrow published an influential book titled Normal Accidents: Living with High-Risk Technologies. Perrow argued that disasters will occur on a regular basis in technology-based systems that are complex and “tightly coupled.” Tight coupling is an engineering term that refers to systems in which a malfunction in any component will cause the entire system to come to a crashing halt—a multi-storied house of cards is tightly coupled, as is a row of dominoes—while complexity implies that specialized knowledge is needed to operate the system correctly.  Perrow offered nuclear power plants, aircraft, and large software projects as examples of complex tightly coupled systems, but he could equally well have used the cases of Knight Capital and the BATS and Facebook IPOs if he had written his book today.

Perhaps because it’s self-evident, Perrow neglected to mention a critical element in his theory of normal accidents, the key ingredient that causes complex tightly coupled systems to be prone to normal accidents: human behavior.  While technology has advanced tremendously over the last century, human cognitive abilities have been largely unchanged over the course of the last several millennia.  Therefore, technologies that leverage human abilities often magnify both positive and negative outcomes.  A chain saw allows us to clear brush much faster than a hand saw, but chain saw accidents are much more severe than hand saw accidents.  Airplanes allow us to travel much farther and faster than covered wagons, but an airplane crash almost always involves more fatalities than a covered wagon mishap.  And automated trading systems provide enormous economies of scale and scope in managing large dynamic portfolios, but trading errors can multiply at the speed of light before they’re discovered and corrected by human oversight.

The paradox of modern financial markets is that technology is both the problem and, ultimately, the solution.  The current financial system has reached a level of complexity that only “power users”—highly trained experts with domain-specific knowledge—are qualified to manage.  But because technological advances have come so quickly and are often adopted so broadly, there aren’t enough power users to go around.  Also, the growing interconnectedness of financial markets and institutions has created a new form of accident: a systemic event, where the “system” now extends beyond any single organization.  The “Flash Crash” of May 6, 2010 is an example, where a game of “hot potato” among high-frequency traders, hedging activity by slower-paced mutual funds, and a loophole allowing traders to post bid/offer quotes that were merely placeholders all conspired to create havoc.  For a brief period during the 20-minute interval from 2:40pm to 3:00pm on May 6, 2010, the Dow Jones Industrial Average dropped nearly 1,000 points, and the stock price of the world’s largest management consulting firm, Accenture, fell to a penny a share.  These events occurred not because of any single organization’s failure, but rather as a result of seemingly unrelated activities across different parts of the system.  Each of these activities was innocuous in isolation, but when they occurred simultaneously, they created the perfect financial storm.

The solution, of course, is not to forswear financial technology—the competitive advantages of automated trading and electronic markets are simply too great for any firm to forgo.  The solution is to develop more advanced technology; technology so advanced it becomes foolproof and invisible to the human operator.  The success of the Apple iPhone is not so much due to its marvelous technology (courtesy of Moore’s Law), but because it makes that technology so easily accessible to the ordinary user.  Even the least tech-savvy consumer can begin using an iPhone within minutes, and within a few days such an individual can do things that were previously reserved for power users.  Steve Jobs recognized better than most marketing experts that people don’t change their behaviors to suit technology as readily as they adopt technology that is suited to their current behaviors.  This is no mean feat.  It requires a deep understanding of the limitations of existing technologies and how to create new technologies that can cope with Murphy’s Law.  Every successful technology has gone through such a process of maturation—the first VCR with its blinking “12:00:00” display versus today’s TiVo; paper road maps versus voice-controlled touchscreen GPS; and the kindly reference librarian versus Google and Wikipedia.  Financial technology is no different—we need version 2.0 of the financial system.

What would the financial system 2.0 look like?  The starting point is the recognition that it is, in fact, a system and must be managed like one.  Financial institutions can no longer take the financial landscape as given when making decisions, but must now weigh the ripple effects of their actions on the system and be prepared to respond to its responses.  Regulators can no longer operate in fixed silos defined by institutional types or markets, but must now acknowledge the flexibility created by financial innovation and regulate adaptively according to function rather than form.  And individuals can no longer take it for granted that a fixed proportion of stocks and bonds will generate an attractive return at an acceptable level of risk for their retirement assets, but must now manage risk more actively and seek diversification more aggressively across a broader set of asset classes, strategies, and countries.

But perhaps the most significant innovation of the financial system 2.0 will be to address the fact that while technology has advanced tremendously in recent years, human behavior has not.  It will be a financial system that isn’t predicated on the purely rational actions of Homo economicus, but one that recognizes the frailties and foibles of Homo sapiens by addressing Murphy’s Law as successfully as it exploits Moore’s Law.  As disruptive and disastrous as the recent series of crises have been, they’ve provided us with a wealth of critical information about the most important weaknesses in our existing financial infrastructure.  The tendency for financial institutions to become too big to fail, the tendency for politicians to issue government guarantees because the reckoning is beyond their term in office, and the tendency for regulators to look the other way when business is booming and no one is complaining are just a few examples of “bugs” that need to be fixed in version 2.0.  We know how to do it.  We just need to want to do it.

Technology is the reason the human race is the dominant species on the planet. We’ve managed to increase our numbers, extend our lifespan, and improve our quality of life all through technology.  But technology is often accompanied by unintended consequences: pollution, global warming, pandemics, and financial crises.  Financial technology can facilitate tremendous growth, but history shows that when used irresponsibly, it can lead to great devastation.  Let’s hope that the Financial System 2.0 will be more Moore than Murphy.