Reveling in NJ

The recent bankruptcy filing by Revel, the $2.4 billion Atlantic City, N.J. casino, should be an eye-opener for those who continue to insist that a “recovery” is underway – especially in the commercial real estate market.

From a recent news report:

Operators of the $2.4 billion Revel casino, which opened last April in Atlantic City, announced plans last week to file Chapter 11 bankruptcy in late March. The bankruptcy will help reduce its more than $1 billion debt and allow it to keep operating, reported MSN MoneyNOW.

According to the Star Ledger, the casino resort was an attempt to turn around Atlantic City’s staggering fortunes and keep gamblers from continuing to flock to casinos in neighboring Pa., which last year surpassed N.J. as the second-largest gambling center in the U.S., behind Las Vegas.

New Jersey Gov. Chris Christie hailed the resort as a “game-changer” and it was therefore provided with $300 million worth of aid by the state. But he couldn’t have been more wrong.

The casino did not transform Atlantic City’s struggling gambling scene as hoped; instead, it has performed poorly over the past 11 months. In January, Revel generated less than $8 million in casino revenue, ranking it last among the city’s 12 casinos, according to state data.

According to News 12 New Jersey, the pre-packaged bankruptcy will wipe away more than $1 billion of its debt by converting more than $1 billion of it into equity for lenders.

Driving between Stamford, CT, where I reside, and New Jersey, I see large swaths of commercial real estate that have either long been shuttered, are currently going out of business, or, as is quite often the case, are struggling to retain business and merely convey a veneer of comfort and stability.

Perhaps the most paradoxical element of these observations is that new commercial real estate continues to be built – whether in the form of strip malls and office buildings (in the suburban areas) or more expensive top-grade commercial space (in the case of New York City). It appears that commercial real estate developers and owners have taken it on the chin twice – first during the onset of the Global Financial Crisis in 2007, and then once again from 2010 onwards.

But there are two primary reasons for this, which together explain how the real-estate market continues to undergo distortions and dislocations from the “real” market that can only produce staggering tragedies such as the Revel casino. Firstly, many developers had expected a typical cyclical recession in 2009, one that would subside within several years. They therefore viewed the low prices at the time as an opportunity to secure tidy profits in the coming years as the market recovered. Unfortunately, it appears that they held the artificial “stock market” in too high a regard and did not examine the “real” market for goods and services, which did not exhibit such a favorable prognosis.

Secondly, governments and politicians who have been proclaiming a “recovery” have been quick to introduce stimulus measures and tax subsidies for these projects, which only serves to exacerbate the plight of the entire market. After all, commercial real estate participants who have not been sufficiently “lucky” or “qualified” (both read: connected) to receive political or government support will witness a transfer of their fortunes to those who have.

We unfortunately will see this game played for a long time along the Malthusian plateau of sustainability, as hungry developers increasingly compete in the ferocious circumstance of an evaporating waterhole occasionally replenished by a political hose.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, connect with him on LinkedIn, or e-mail him.

You can visit his blog as well.


Why SNAP (Food Stamps) and the Baltic Dry Index are Excellent Measures of the Real Economy

by Srikant Krishna

Economics and finance were subjects in which I had originally possessed neither a modicum of interest nor an iota of consideration. However, as I matured, and focused my intellectual and physical aptitude on comprehending and solving problems in seemingly unrelated fields such as biophysics and computer science, I began to look beyond the horizons of the Protein Folding Problem, or of understanding the Universal Turing Machine. As it so happened, I was recruited to Wall Street at the same time. To be thrust from a world of laws based on first principles and experimental validation into one of handshakes and opinion was, to be honest, rather enlightening.

Through my subsequent years of attempting to understand an exceptionally complex system comprised of not-so-rational human participants, there has always been one central question that I have found myself contemplating at the open and close of each day: how is the world, and in particular the United States, faring? And I mean this from the point of view of individuals, families, companies, and organizations. I further wanted to obtain a concrete sense of the very abstract psychological characteristics of emotion, outlook, and prospects across what is referred to as “the aggregate”. It is particularly easy, through the internet, to obtain a heterogeneous battery of opinion from all parts of a spectrum, voiced by people of similarly diverse backgrounds and experience. This, however, falls short of my requirement of a concrete, objective measure of an invariably complex, subjective topic. I therefore proceeded to spend years, and eventually more than a decade, as any serious investor, financial market participant, or analyst would have done, poring over various theories, reflections, opinions, “thought pieces”, and so forth. I enjoyed engaging in this investigative research because it was a natural extension of the intellectual curiosity that I had in the past expended on other sciences.

The stated purpose of this article, however, is to demonstrate why the Supplemental Nutrition Assistance Program (SNAP) and (a component of) the Baltic Dry Index are the best gauges of economic comfort and activity. Let us not waste any more time, and instead address the principal components of this discourse; allow me the opportunity to access the online compendium of human intellect to introduce the program. The health of the American economy is, without doubt, perhaps the most influential factor in assessing the health of the global economy. As an aside, I am elated that this specific article can avoid discussion of the European disaster, the Japanese crisis, and the hidden, or more aptly, “managed” Chinese freefall.

From Wikipedia:

“The Supplemental Nutrition Assistance Program (SNAP),[1] formerly known as the Food Stamp program, provides financial assistance for purchasing food to low- and no-income people living in the U.S. It is a federal aid program, administered by the U.S. Department of Agriculture, though benefits are distributed by individual U.S. states. They can be used to purchase any prepackaged edible foods, regardless of nutritional value (e.g. soft drinks and confections). Hot foods (such as those found in a supermarket deli) are ineligible, as well as items in fast food restaurants and similar retail settings.

For most of its history, the program used paper-denominated “stamps” or coupons — worth US$1 (brown), $5 (blue), and $10 (green) — bound into booklets of various denominations, to be torn out individually and used in single-use exchange. Because of their intrinsic value of 1:1 with actual money, the coupons were printed by the US Bureau of Engraving and Printing. Their rectangular shape resembled a US dollar bill (although about 1/2 the size), including intaglio printing on high-quality paper with watermarks.

In the late 1990s, the Food Stamp program was revamped, with some states phasing out actual stamps in favor of a specialized debit card system known as Electronic Benefit Transfer (EBT), provided by private contractors. Many states merged the use of the EBT card for public welfare programs as well, such as cash assistance. The move was designed to save the government money by not printing the coupons, make benefits available immediately instead of forcing the recipient to wait for mailing or picking up the booklets in person, and reduce theft and diversion. The 2008 farm bill renamed the Food Stamp Program as the Supplemental Nutrition Assistance Program (as of October 2008), and replaced all references to “stamp” or “coupon” in federal law to “card” or “EBT.”[2][3]

In the 2012 fiscal year, $74.6 billion in food assistance was distributed.[4] As of September 2012, 47.7 million Americans were receiving on average $134.29 per month in food assistance.[4] In Washington, D.C., and Mississippi, more than one-fifth of residents receive food assistance.[5]”
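As a quick arithmetic check, the quoted figures are mutually consistent: 47.7 million recipients at an average of $134.29 per month implies annual outlays close to the reported $74.6 billion. The sketch below is merely this back-of-the-envelope computation (participation varied over the fiscal year, so an exact match is not expected):

```python
# Cross-check the quoted SNAP figures: 47.7 million recipients at an
# average $134.29/month should imply annual outlays near the reported
# $74.6 billion for FY 2012.
recipients = 47.7e6           # Americans receiving SNAP, Sept. 2012
avg_monthly_benefit = 134.29  # dollars per person per month
implied_annual = recipients * avg_monthly_benefit * 12

print(f"Implied annual outlay: ${implied_annual / 1e9:.1f} billion")
# -> Implied annual outlay: $76.9 billion, roughly consistent with $74.6 billion
```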

This is fine and excellent: a social safety net intended to assist the millions of Americans in poverty. After all, as much of a “socialist”, “liberal”, “independent”, “conservative”, or “libertarian” as others have claimed that I am, I also understand the fundamental truth that an individual cannot decide a priori where he or she is born. And infants, small toddlers, and children, as such, are not responsible for their living standards. Let me be fully clear: I fully support helping these sorts of families and individuals. Let me be equally clear: I detest and castigate those who would abuse the system. The purpose of this article is not to debate the merits of the SNAP program, but rather to employ it as a potent indicator of the state of the real economy.

SNAP numbers, in my opinion, are exceptional in two regards. First, they differentiate themselves from seasonally-adjusted, birth-death-modeled, statistically smoothed employment numbers in the same way that a slab of steak differs from processed sausage. In other words, the number of Americans or households participating in the SNAP program is about as raw an economic measure as one can obtain, having miraculously passed through the statistical machinations of a very large bureaucracy untouched. In contrast with the plethora of economic revelations continually dispensed by this apparatus, the purity of a simple measurement can be held in high regard.

The second reason that the SNAP number has tremendous value is its intrinsic ability to capture changes in real wages and consumer prices. A perennial exercise of the economic community is the computation of inflation, and this is rife with theory, discourse, and conjecture, eventually leading to statistical modeling that is increasingly divorced from the factual world. In the real world, as individuals and families begin to feel the pinch of declining living standards, their eventual recourse is to rely on public assistance to support basic needs such as food, housing, and medical care. Any perceived social stigma associated with receiving this assistance varies amongst individuals and households, but we can therefore look at changes in the SNAP number as evidence of changes in the real, not modeled, economy.

Exhibit A, let us examine the recent history of the number of Americans on the SNAP program:


The SNAP chart indicates that an atrocious and alarming change in living standards has occurred and persisted from the mid-2000s onwards. Of course, with gasoline at nearly $5 per gallon in certain locales, this is rather easy to conceptualize. Perhaps given the “cyclical” booms and busts that we have been accustomed, nay, entrained, to, the initial deterioration should not have come as a surprise. Some would, in fact, suggest that the period between 1978 and 1982 was just as intense and significant. However, the evidence that the trend has not reverted, and instead has persisted (though at a markedly lower rate), should indeed be alarming. Governments and central banks around the world have continually proclaimed an astral “recovery” which has never manifested itself in the eyes (and bellies) of 50 million Americans. Six years after the initial precipice was encountered (I begin my enumeration from 2007), we are still experiencing economic stagnation and decline across the world, most notably in Europe at the current moment. For how much longer can we pretend that this tectonic economic shift is “cyclical” (read: transient and reverting) in nature rather than “structural” (read: permanent and continuing)? Is it not the moral responsibility of governments and other “entrusted” sources to at least make the populace aware of this transition, if not to correct it (as the causative forces are largely beyond their control at this point in history)?

We next will direct our attention towards global trade. Once again, to quote from the definitive, dynamic compendium of human thought:

From Wikipedia:

The Baltic Dry Index (BDI) is a number issued daily by the London-based Baltic Exchange. Not restricted to Baltic Sea countries, the index provides “an assessment of the price of moving the major raw materials by sea. Taking in 23 shipping routes measured on a timecharter basis, the index covers Handysize, Supramax, Panamax, and Capesize dry bulk carriers carrying a range of commodities including coal, iron ore and grain.”[1]

Most directly, the index measures the demand for shipping capacity versus the supply of dry bulk carriers. The demand for shipping varies with the amount of cargo that is being traded or moved in various markets (supply and demand).

The supply of cargo ships is generally both tight and inelastic—it takes two years to build a new ship, and ships are too expensive to take out of circulation the way airlines park unneeded jets in deserts. So, marginal increases in demand can push the index higher quickly, and marginal demand decreases can cause the index to fall rapidly. e.g. “if you have 100 ships competing for 99 cargoes, rates go down, whereas if you’ve 99 ships competing for 100 cargoes, rates go up. In other words, small fleet changes and logistical matters can crash rates…”[5] The index indirectly measures global supply and demand for the commodities shipped aboard dry bulk carriers, such as building materials, coal, metallic ores, and grains.
Because dry bulk primarily consists of materials that function as raw material inputs to the production of intermediate or finished goods, such as concrete, electricity, steel, and food, the index is also seen as an efficient economic indicator of future economic growth and production. The BDI is termed a leading economic indicator because it predicts future economic activity.[6]

Another index, the HARPEX, focuses on containers freight. It provides an insight on the transport of a much wider base of commercial goods than commodities alone. HARPEX is regarded as a Current-Activity Indicator, because it measures and charts the changes in freight rates for ‘container ships.’ Container ships typically carry a wide variety of finished goods from a multitude of sellers. These are factory output goods headed for retail markets, at the other end of the supply chain.[7]

Other leading economic indicators—which serve as the foundation of important political and economic decisions—are often measured to serve narrow interests, and subjected to adjustments or revisions. Payroll or employment numbers are often estimates; consumer confidence appears to measure nothing more than sentiment, often with no link to actual consumer behavior; gross national product figures are consistently revised, and so forth. Unlike stock and bond markets, the BDI “is totally devoid of speculative content,” says Howard Simons, an economist and columnist. “People don’t book freighters unless they have cargo to move.”[8]

And how has this figure, that has been claimed to be “totally devoid of speculative content”, fared in recent history?

Exhibit B, a chart of the Baltic Dry Index:


This chart indicates that a disaster has occurred in the shipping industry, with rates descending below the trough of the 2008-2009 crisis and reverting to the levels of a decade ago. As a friend of mine, a very sharp Federal Reserve economist, recently pointed out: since the BDI value is merely a function of supply and demand, where is the evidence that the supply of freighters has not increased? In other words, how can we not attribute the sharp decline of the past years (subsequent to 2009) to an armada of freshly minted freighters and tankers, replete with crews, eagerly entering the shipping market? The answer to this question lies in a comprehensive examination of the annual reports of the major shipping companies (in particular, DryShips, Eagle Bulk Shipping, and so forth) since 2009. The story that unfolds is rather sad, nay pathetic, but necessary for an honest investigation into the causes of the collapse in the BDI. Regardless, the drop in global shipping rates to levels comparable to 2001 (or earlier) cannot be ignored, especially in the face of fuel prices that have doubled or tripled over the same period.
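The supply-inelasticity argument in the quoted passage (99 ships competing for 100 cargoes versus 100 ships for 99 cargoes) can be made concrete with a toy model. Everything below is invented purely for illustration – the functional form, the exponent, and the rate parameters are not drawn from actual freight pricing:

```python
# A stylized illustration (not a model of real freight pricing) of why
# inelastic vessel supply makes spot rates swing violently: when cargoes
# slightly outnumber ships, shippers bid rates up sharply; when ships
# slightly outnumber cargoes, owners undercut each other toward cost.
def toy_spot_rate(ships, cargoes, base_rate=10_000, operating_cost=4_000):
    """Return a toy daily charter rate given vessel supply and cargo demand."""
    utilization = cargoes / ships
    if utilization >= 1.0:
        # Excess demand: rates rise steeply with the shortage.
        return base_rate * utilization ** 10
    # Excess supply: rates get competed down toward operating cost.
    return operating_cost + (base_rate - operating_cost) * utilization ** 10

for ships, cargoes in [(99, 100), (100, 100), (100, 99)]:
    print(f"{ships} ships, {cargoes} cargoes -> {toy_spot_rate(ships, cargoes):,.0f}/day")
```

Even a one-vessel imbalance moves the toy rate by several percent in either direction, which is the qualitative behavior the Baltic Exchange quote describes.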

In this article, we have examined why SNAP and the Baltic Dry Index are two excellent indicators of the real, rather than a perceived, modeled, projected, or propagandized, economy. There is a very lucrative trade in information, conducted through the well-characterized medium of the internet, in which voice, images, software, and databases are transmitted digitally across the globe. A proper, dynamic dataset compiled sans biases would be an excellent measure of the online activity of the world as a whole, but it would fall short of conveying the “principal components” that a trader, investor, or asset manager would be interested in: a current quantification of a prominent nation’s ability to maintain living standards, or a measure of physical goods exchanged globally. We cannot rely on official government figures any longer, as these have been abused for decades, and the collapse in their credibility has been well-documented throughout both the ethereal internet and the varnished peer-reviewed literature.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, connect with him on LinkedIn, or e-mail him.

You can visit his blog as well.

On the Diabatic Process and Standards of Living

by Srikant Krishna

In thermodynamics, an adiabatic system is one that is isolated with respect to the transfer of heat. An example is two thermally sealed flasks of coffee, one warm and one cold. This is an adiabatic system because, in theory, no heat can be transferred between the two objects; the two flasks will forever maintain their own temperatures. We contrast this with a diabatic system, which, as the nomenclature suggests, permits the exchange of heat between the two components. In this case, we would insert a conductive metal pipe between the two flasks. Over time, we can then expect the temperatures of the two flasks to reach equilibrium.
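The two-flask diabatic system can be sketched numerically with a discrete version of Newton's law of cooling; the coupling constant k, the temperatures, and the step count below are arbitrary choices for illustration:

```python
# A minimal sketch of the diabatic system above: two flasks of coffee
# connected by a conductive pipe, relaxing toward a common temperature.
def equilibrate(t_warm, t_cold, k=0.1, steps=100):
    """Step the two temperatures toward equilibrium; k is the thermal coupling."""
    for _ in range(steps):
        flow = k * (t_warm - t_cold)  # heat flows from warm to cold
        t_warm -= flow
        t_cold += flow
    return t_warm, t_cold

warm, cold = equilibrate(80.0, 20.0)
print(round(warm, 3), round(cold, 3))  # both converge to the mean: 50.0 50.0
```

With the coupling removed (k = 0, the adiabatic case), the loop changes nothing and each flask keeps its initial temperature forever, which is exactly the contrast drawn above.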

Prior to the 20th century (and in particular, the new millennium), ideas, languages, agriculture, buildings, and technological implements – in other words, cultures and societies – were separated by vast distances, often reachable only through long and arduous voyages. This is one of the reasons why the merchant import/export trade was extremely profitable, regardless of the commodity or good being exchanged. In fact, many of the earliest corporations were formed precisely to engage in these sorts of enterprises. Historically, regions such as South Asia and East Asia contributed the lion’s share of the world’s GDP, until the development of European financial and military technology and the concomitant colonial expansion of those nations. And thus the highly prized spices, textiles, and crafted goods of the East became available to the world. In Africa, diamonds, gold, ivory, and sadly even human beings were the prized exports. The gold, land, and natural resources of the Americas did not escape the machinations of European colonialists either.

The world was a very large place indeed, with an incredible diversity of culture and natural resources. Nations and cities became highly specialized based on the type of culture, resources, and economic activity transpiring at a particular locale. The first several millennia of human history were replete with a multitude of empires and city-states with such ambitions: the Hittites, Assyrians, Persians, Greeks, and Romans all successfully laid claim to lands and peoples that extended far beyond their origins. For the purposes of this article, we begin with the Age of Colonization, during which European cities very rapidly achieved status as centers of global import/export, and hence banking and finance advanced at an expedited pace in these regions. Other parts of the world that were colonies, rather than the centers of colonialism, specialized in resource extraction, the manufacture of certain goods, and agriculture. This implicit diversity, a fundamental consequence of colonialism, created profound disparities not only in the sort of economic activity that was sited in a specific locale, but in a host of ancillary institutional and organizational structures as well.

For example, the European centers of banking, finance, trade, and engineering demanded that a substantial portion of the population become skilled and trained in these economic activities. Correspondingly, European universities were responsible for an astonishing array of advancements, and became the leading world-class centers of higher learning, technological innovation, and organizational and management skills. It is interesting to note how European cities displaced the previous centers of trade, learning, and technology that existed primarily in the Islamic world during the European Dark Ages – a world that had engaged in somewhat similar forms of colonialism throughout the Middle East and North Africa. A further consequence of the European domination of global economic activity was that capital formation and retention were preserved regionally and nationally within the very same European centers. For example, if a well-networked group of investors or entrepreneurs sought to raise funding for their enterprises, regardless of the particular type of activity, then the most obvious sources would likely be situated in England, Italy, the Netherlands, France, Spain, and other major economic hubs.

The heterogeneity imposed by differential economic activity produced stark sociological disparities in living standards, life expectancy, education, and population growth. For example, a diamond miner on the African continent had virtually no access to education, technology, or capital with which to improve his standard of living, and therefore the entire concept of social mobility was virtually absent in most of the world outside the major economic centers. This is not to say that cultural factors did not play a significant role – as, for example, in Japan or India – but the notion of an individual from a poor or middle-class background achieving an education, forming networks with the privileged, and living a much more comfortable life than previous generations was an exceptionally rare circumstance outside European circles, and even there it was an improbability.

Eventually, through revolutions, warfare, and the dissemination of certain philosophical notions regarding the roles of individuals and governments, colonialism and monarchism came to an end, engendering the formation of many modern democracies, federations, and republics throughout the entire world. The industrial revolution dramatically made the world smaller through a host of developments, including the steam engine, automobiles, the telegraph, airplanes, and electricity. Unfortunately, however, as seems to be the case with any technological innovation, whether it serves as a benefit to human civilization or a detriment is determined by the end user. Steam engines, automobiles, airplanes, and electricity enabled an extraordinary leap in the devastative capabilities of militaries. The period between 1914 and 1945 was one of the most brutal and horrific eras in human civilization. It was only after the devastating power of atomic energy was unleashed on entire cities that world leaders were forced to consider ramifications of war that included the permanent destruction of a nation. In this analysis, the Mutually Assured Destruction (MAD) protocol has served as a rational, game-theoretic barrier that thus far has prevented the offensive employment of highly advanced thermonuclear weapons.

But the world became much, much smaller. Supersonic flight, rapid rail transit, highways, telephone systems, and satellites rendered travel and communication to distant places no longer an exclusive privilege of the affluent. And as the world became smaller, massive waves of emigration and migration became possible, and they continue to this day in many regions of the world. Socially, class mobility became the norm, and no longer the exception. From 1945 onwards, it appeared that each generation’s standard of living was expected to exceed the prior’s. Coupled with the August 15th, 1971 elimination of any constraint on the creation of credit, there was literally no physical or regulatory barrier that could prevent an ordinary citizen from becoming educated and, through success, achieving a very comfortable or luxurious life. Well, actually, they could just borrow the money and pretend to live a successful life, but that is a mere aside.

Three final technological innovations reduced the size of the world to roughly that of a navel orange. First, the internet made it possible to communicate large amounts of data at very high rates with virtually anyone else. The data could correspond to software systems, patient record information, live video feeds, electronic mail, or digitized speech. In essence, it permitted the transmission of any type of information from one individual or organization to another in a matter of milliseconds. Simultaneously, advances in cellular technology made it possible for one individual to communicate directly through voice or messaging with anyone else on the entire planet – a truly marvelous feat that would have been difficult to predict even a few decades earlier. In fact, the most basic cellphones today far exceed the communicator devices employed by the original Star Trek crew. And finally, today we have mobile devices that exceed mere telephony: they can access the internet and are capable of connecting hundreds of millions of people together. The result has been astounding collaborative human accomplishments – I point to Wikipedia, a dynamically evolving record of the entire human intellect, as the most prominent.

In today’s world, the difference in learning tools and educational and informational capabilities between the rich and poor, between the developed and undeveloped parts of the world, is virtually nil. For the first time in human existence, it is possible for an individual to be limited in their ability to acquire knowledge only by the expanse of their curiosity and, of course, sadly, time. Those children being born into privilege today will have to compete directly with their counterparts in any part of the world. The notion of the retention of capital in a few favored cities, regions, or even nations is rapidly evaporating.

We began this article by considering an adiabatic system, in which heat cannot be exchanged by the various components of the system. This was the model of both the colonial era and the 20th century. But the world of today is a diabatic system, and it is fundamentally very different. If we regard “heat” loosely as the flow of information, culture, and capital, it is clear that the process of wage arbitrage and relative changes in the standards of living throughout the world must continue until the system reaches equilibrium.

Dear reader, please consider this momentarily, as it is the crux of this précis. No law, regulation, or policy impediment can prevent the diabatic process from forcing the system into an equilibrium state. Waking up in Manhattan or Shanghai will result in similar experiences in the very human terms of living standards: goods available for consumption, medical care, technology, transportation, and so forth. This is not to say that vast geographic differences will not strongly influence economic activity and quality of life – as they always have. But the world has become very small indeed.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, connect with him on LinkedIn, or e-mail him.

You can visit his blog as well.

Why Interest Rates Will Asymptotically Approach Zero Forever (Part 2 – Specifics)

by Srikant Krishna

In the previous post we set the stage: the largest credit (read: debt) expansion bubble in the history of the world. Affluence truly seemed both eternal and contagious, changing lives and standards of living around the globe. Central banks erroneously believed that they had firm control of interest rates, and through them the price of time, and, by transitive sequence, a general mechanism for manipulating economic growth.

But in reality, they were absurdly incorrect. The global financial crisis has many causes at its roots, but culpability can be assigned to two very basic human characteristics: incompetence and greed. While politicians all over the world were making promises that simply could not be funded, primary dealers of bonds were more than happy to comply by funding deficits across many developed nations. And central bankers, themselves components of the incestuous revolving-door apparatus that conjoins politics, academia, policy institutions, central banking, and investment banking, suffered from exceptionally pathological delusional incompetence.

As individuals, corporations, and finally governments attained the economic analog of the Malthusian carrying capacity in terms of debt and the ability to service it, the global credit and financing systems began to collapse, and with them, a massive, indiscriminate delevering across the board produced what is known as the Global Financial Crisis. After all, a debt-based monetary system requires both the principal and the interest to be repaid – in other words, a perpetually exponential growth in the issuance of credit. If, for whatever reason, endogenous (war, depression) or exogenous (natural disaster), the credit issuance mechanism were to stall, the funds required to pay the interest component of debt service would not be available (and perhaps neither would the principal), and the entire system would deflate back to reality through an endless series of defaults.
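The exponential requirement can be seen in a few lines: if interest must itself be financed by new credit, outstanding credit compounds without bound. The rate, principal, and horizons below are arbitrary illustrative values:

```python
# A simplified sketch of the point above: in a debt-based system where
# interest is serviced by rolling it into new borrowing, the credit
# outstanding must grow exponentially. If issuance stalls, the interest
# component can no longer be paid.
def required_credit(principal, rate, years):
    """Total credit outstanding after `years` if interest always compounds
    into new borrowing."""
    return principal * (1 + rate) ** years

debt = 100.0  # arbitrary units
for y in (10, 20, 30):
    print(f"after {y} years: {required_credit(debt, 0.05, y):.1f}")
```

At a 5% rate the required credit roughly quadruples over 30 years; any interruption in that growth leaves a funding gap that can only resolve through default.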

During this period, U.S. Treasurys attained extremely low yields, and remained that way for the subsequent years. Investors felt that the safest instruments in which to park their assets were U.S. Treasurys, backed by the full faith and credit of thousands of nuclear warheads, hundreds of foreign bases, and a very aggressive and cogent policy, driving the yields on these instruments down. Additionally, coordinated central bank action across the globe resulted in a massive backstopping of otherwise insolvent financial institutions through a combination of mechanisms including asset purchases, repos, and the lowering of both the effective federal funds rate and the discount window rate in the United States, and of their equivalents abroad.

Europe, however, was beginning to fall apart at the periphery (even though the core itself had been rotting for quite some time). Some modicum of recovery finally seemed to occur in the summer of 2009, but it was short-lived, lasting precisely until the Greek chaos that ensued in the spring and summer of 2010. The situation continued into 2011, with the collateral damage including the ablation of MF Global and “rumors” of pending calamity at Jefferies. It was the ascendancy of Mario Draghi to the throne at the head of the ECB, and an extraordinarily determined European bureaucracy, that finally compressed PIGS yield spreads and staved off an otherwise inevitable fiscal collapse. But I hear that a substantial portion of the populace is burning freshly cut forest wood for warmth this winter…

The purpose of this pair of articles is to explain why interest rates will remain low forever, asymptotically reaching zero (with a concomitant negative real yield). There are three required factors that would elicit, sustain, and promote this phenomenon indefinitely in the absence of popular and social abrogation: the mathematics of bond yields, the unsustainability of fiscal provisions, and the intervention of central banks. Let us now explore in detail each of these requirements and contributing factors to the zero interest rate environment that has become the new normal.

From a basic mathematical perspective, with the intuition and practice that can be attained in grade school or high school, we can understand that the yield on a bond can be represented as k/x, where k is the fixed coupon payment and x is the price of the bond. Visually, as the price of the bond increases, a plot of yield versus price becomes extraordinarily flat. Mathematically, as the price of the bond (x) increases to very high levels, the derivative, or rate of change, of this function (-k/x^2) approaches zero. The largest rates of change of yield with respect to price occur when the price of the bond is very low. This is one of the reasons why, in the summer of 2011, yields in peripheral European countries were exploding. However, once the price of the bond is forced to be sufficiently high, whether by endogenous market participants (funds) or exogenous factors (central bank intervention), it is exceptionally difficult to influence the yield, given the simple aforementioned relationship. Therefore, the ECB, BoJ, and the Fed need only push bond prices to a certain level, whence it becomes absurdly facile to maintain low rates.
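As a rough numerical sketch of this flattening effect (the coupon and prices below are invented purely for illustration):

```python
# Illustration of why yield becomes insensitive to price at high prices.
# Model: current yield y = k / x, so dy/dx = -k / x**2, as described above.
# The $5 coupon and the three prices are hypothetical values.

def current_yield(k: float, x: float) -> float:
    """Current yield: annual coupon k divided by bond price x."""
    return k / x

def yield_sensitivity(k: float, x: float) -> float:
    """Rate of change of yield with respect to price: -k / x**2."""
    return -k / x ** 2

k = 5.0  # a $5 annual coupon
for price in (40.0, 80.0, 120.0):
    # At a distressed price of 40, yield moves ~0.0031 per $1 of price;
    # at 120 it moves only ~0.00035 per $1.
    print(price, round(current_yield(k, price), 4), round(yield_sensitivity(k, price), 6))
```

Tripling the price from 40 to 120 cuts the per-dollar yield sensitivity by a factor of nine (the square of three), which is precisely why a buyer that can pin prices high needs very little further effort to keep yields flat.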

Secondly, if bond yields were to increase, many so-called “developed” economies would become insolvent overnight. The cost to service the debt would far exceed the income generated by taxation for these sovereigns. Personally, I believe that the central banking system frankly does not condone this sort of fiscal subterfuge and immorality, but is forced to participate in the suppression of rates for purely political purposes. Let us be honest and state that central banks today are far less “independent” than they insist, and are increasingly branches of fiscally insolvent (read: inept) governments. After all, when presented with the scenario of a standing army of soldiers and domestic police demanding promised benefits, versus a small group of highly educated gentry in suits, my expectations of the superiority of enforced subjugation certainly lie with the former group. Yields cannot increase without corresponding widespread social chaos, and therefore will be suppressed by an increasingly politicized banking system.

The final reason that bond rates will, in perpetuity, remain low for “certain developed nations” has to do with the practical basis underpinning fiat currency. This topic has been well discussed both in the real world and on the ethereal internet, so I will profess neither any particular bias nor explanation. But in the context of an examination of factual evidence, given the previous two reasons, all that is required for central bankers (or primary dealers and their equivalents) to retain or suppress interest rates is simply to act as marginal buyers. It really is that simple. Given the mathematical relationship between yield and price, and the political incentive, nay charter, that is extended to certain financial institutions, all that is required is a slight excess of demand (buyers) over supply (sellers) to maintain low interest rates forever. But unlike hedge funds, retail investors, sovereign wealth funds, and even investment banks, those “privileged” nations with a central banking system possess the singular capacity to conjure bids from the void, with digital funds transmitted within femtoseconds and immediately leaping into the commodities and “risky” asset markets. Good luck with your USD $6 chai latte.

As the real economies across the globe continue to decline, and the indulgent promises afforded during the extravagant years come due, it is the confluence of these three powerful factors that will ensure that yields remain low. Until, of course, social or natural phenomena disrupt the ingrained charter that has not been sapiently decided, but rather forced, upon swaths of the world.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, and on LinkedIn at, or e-mail him at

You can visit his blog as well.

Why Interest Rates Will Asymptotically Approach Zero Forever (Part 1, Background)

There has been much discussion and chattering in the financial and mainstream media regarding the possibility that interest rates may increase because of reduced action by the Fed. Across the Atlantic, investors have been mystified as to how countries experiencing upwards of 50% youth unemployment can retain low sovereign debt yields simply due to the actions of one Mr. Draghi. Across the Pacific, a large number of investors will shortly lose their shirts betting on higher JGB yields.

Low interest rates are a necessary and predictable consequence of the massive credit-fueled bubble conceived on August 15th, 1971, when the dollar was no longer constrained by a gold standard, even on an international basis. Since that era, the wizardry, nay sophistry, that is finance has allowed credit (a more polite expression of debt) to be created at an unprecedented rate. But the miracle of finance relies on one fundamental principle: debt is borrowing from the future. So, relatively quickly, individuals began to finance their vehicles over several years, students accrued student debt to be paid back over their lifetimes, and households suddenly began to procure homes through 30-year loans. This is the essence of credit: the borrower trades their future production for current consumption.

The availability of cheap credit elicited an enormous growth in asset prices across the board during the past forty years. Wall Street fared the best, with a stock market that seemingly was to return 8% or more in perpetuity. In fact, many annuity policies and pension plans were structured on this flawed assumption. In prior decades, stocks and bonds were reserved for the affluent. The tremendous growth in the availability of credit, in combination with the inception of the IRA, drew a large swath of the population into investments in financial instruments that they would never otherwise have made. That money would instead have been saved at the local bank, which would have lent it to local community businesses and so forth. Instead, the sudden infusion of cash flowed straight into the so-called money center banks, which then proceeded to use the proceeds to engage in more sophisticated casino games known as derivatives and structured products.

But just as both individuals and corporations were prone to very fundamental long-term errors due to the availability of easy credit, governments also fell prey to deceptively simple borrowing. Balanced budgets became an extinct creature, and slowly but surely, Washington, Tokyo, Madrid, and their corresponding state and local governments expanded in size and scope. Furthermore, pension plans for government workers were established based on defined-benefit, pink-unicorn 8% per annum asset growth projections. Cheap credit enabled the West, which had extremely efficient credit generation and distribution systems in place, to eventually win the Cold War against a system in which borrowing from the future was no simple task, and one that would have been regarded as mystical by the not-well-connected.

And the cracks were starting to show early on. The constant pressure imposed by leverage and borrowing began to take its toll in the 1990s, with issues such as wage arbitrage and NAFTA being the explosive economic choices that corporations and countries had to grapple with. After all, when the bills started coming in, the only way to maintain an identical lifestyle was to reduce costs. And so foreign manufacturing and services quickly began to transform the workforces of so-called developed economies into systems that relied on the exchange of capital rather than the production of real goods or services. Simon Johnson, in his May 2009 article in The Atlantic, “The Quiet Coup”, summarizes this process in great detail and with an excellent use of the English language.

It was a singular and revolutionary innovation that postponed the end of the debt bubble and produced real and measurable economic growth: the Internet. This technological revolution extended the “good years” for another decade, until the early 2000s. But there was a problem with the new era ushered in by this advancement: it made global wage and price arbitrage much easier to accomplish. Where previously computer code might have had to be transmitted by satellite using proprietary protocols, fiber optics and Ethernet changed the picture. An order for factory goods no longer required faxes or snail mail. It became trivial to communicate with anyone, anywhere, anytime around the globe.

And therein lies the basis of the dilemma. In a world where everyone is on nearly equal footing, how can there be justification for one country or region to afford a better lifestyle than any other? The short answer is that there is none, and Generation X, Generation Y, and the millennials are feeling the impact of having to compete not with the two hundred students in a high school graduating class, but with two hundred million students of the same age from around the world, who are a simple flight away from attending the same university or obtaining the same job.

So, against this painful backdrop for developed economies, the chickens are suddenly coming home to roost in droves. In the second part of this article, I will describe in detail why it follows that interest rates will remain low forever.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets. He grew up in Holmdel, New Jersey, New York City, and Boston, and currently resides in Stamford, CT.

You can follow him on Twitter @SrikantKrishna, and on LinkedIn at, or e-mail him at


Capturing, Storing and Backtesting CTS/CQS Tick Data in C#/.NET

Tick data is the lifeblood of the capital markets. Unlike order book data, which can be stuffed, stale, and away from the inside market in the majority of cases, tick data represents actionable quotes and transpired trades that can be regarded as the “principal components” of capital market data. Within tick data, one can measure volume, quoting frequency, spreads, VWAPs, moving averages, volatility, and so forth. This article therefore emphasizes the capture and analysis of tick data as opposed to order book information, which can be loosely defined as orthogonal in certain respects.

There was once a time when even the attempt to capture and record tick data, specifically the CTS/CQS “tape” from the U.S. equity markets, was a sophisticated process involving a team of individuals. Even more rarefied was the replay/analysis/backtesting of that tick data, which was typically conducted only within investment banks or hedge funds.

I want to briefly describe how I effectively record, store, analyze, and backtest the “tape” easily and efficiently each day as part of my model construction and trading strategy deployment.

On average, the CTS/CQS tape produces about 30GB of information per day, plus or minus a few GB depending on precisely which fields are stored. I attempt to store everything (condition codes and such), so my files tend to be a little larger. I receive the tick data through multicast UDP, and I immediately fire an event that strips each message off of the network buffer and throws it onto a separate queue in memory. This is so as not to lose data during periods of intense volume (the open, the close, Fed announcements, and so forth). Once it is in my in-memory queue, I then proceed to write out each tick, represented as either a trade or a quote. I use a common class to represent both trades and quotes, since the two share many useful characteristics.
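The author's capture pipeline is written in C#/.NET against a live multicast UDP feed; as a language-agnostic sketch of the same receive-then-queue pattern, here is a minimal Python version. The `Tick` layout is an illustrative assumption, and the network feed is simulated by an in-process list so the structure is runnable anywhere:

```python
# Minimal sketch of the receive-then-queue capture pattern described above.
# The Tick fields and queue wiring are illustrative assumptions, not the
# author's actual C#/.NET implementation.
import queue
import threading
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    kind: str      # "trade" or "quote": one common class covers both
    price: float
    size: int

def receiver(feed, buffer: queue.Queue) -> None:
    """Drain the (simulated) feed immediately onto an in-memory queue so
    nothing is dropped during bursts (open, close, Fed announcements)."""
    for tick in feed:
        buffer.put(tick)
    buffer.put(None)  # sentinel: feed is finished

def writer(buffer: queue.Queue, out: list) -> None:
    """Consume ticks from the queue and persist them (here: a plain list)."""
    while True:
        tick = buffer.get()
        if tick is None:
            break
        out.append(tick)

feed = [Tick("SPY", "trade", 155.25, 100), Tick("SPY", "quote", 155.26, 500)]
buffer: queue.Queue = queue.Queue()
stored: list = []
t1 = threading.Thread(target=receiver, args=(feed, buffer))
t2 = threading.Thread(target=writer, args=(buffer, stored))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(stored))  # 2
```

The key design point is that the receiver thread does nothing but move data off the wire; all slower work (serialization, disk I/O) happens on the consumer side of the queue.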

I begin recording at 09:00 each day (to allow for algorithmic “pre-market” analysis) and stop at 16:20. The roughly 20-30GB files are then compressed into .gz format using standard software such as 7-zip, the originals are discarded, and the compressed files are transferred to my Microsoft Azure cloud storage account. I can invariably compress the files to about 10% of their original size, or roughly 2.5GB to 3.5GB.
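The end-of-day step (compress, then discard the original) can be sketched as follows. The author uses external tools such as 7-zip; this hypothetical helper uses Python's standard `gzip` module instead, and the file name is invented for the demo:

```python
# Sketch of the end-of-day compression step: gzip the raw capture file,
# then discard the original. The path and compresslevel are assumptions.
import gzip
import os
import shutil

def compress_capture(path: str) -> str:
    """gzip `path` to `path + '.gz'`, remove the original, return the new path."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb", compresslevel=9) as dst:
        shutil.copyfileobj(src, dst)
    os.remove(path)
    return gz_path

# Demo with a small, highly repetitive file; real tick data also compresses
# well (to roughly 10% of original size) because of its repeated structure.
with open("ticks_demo.bin", "wb") as f:
    f.write(b"SPY,155.25,100\n" * 10000)
print(compress_capture("ticks_demo.bin"))  # ticks_demo.bin.gz
```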

I then download recent updates on a periodic (weekly) basis and distribute them across all of my backtesting/analysis servers. I replay the tick data using the C#/.NET built-in decompression stream reader. Keep in mind that as each tick is decompressed, it is placed on a queue and an event is fired that processes the tick throughout my backtesting system and strategies. Therefore, I usually have 6 cores operational on a dual Xeon 8-core server at any given point. Backtesting a single day only requires a few minutes (depending, of course, on the complexity of the strategy), and then the entire set of trades and messages over the backtesting period is serialized and stored as a “Model” object.
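The replay side streams ticks out of the compressed file and fires an event per tick. Again as a hedged Python sketch (the simple CSV record layout and callback signature are assumptions; the author's system uses .NET events and a richer tick class):

```python
# Sketch of the replay loop: stream the compressed capture line by line,
# decode each tick, and hand it to subscriber callbacks (the "event").
import gzip

def replay(gz_path: str, handlers: list) -> int:
    """Stream ticks out of a gzipped capture and fire each handler per tick."""
    count = 0
    with gzip.open(gz_path, "rt") as f:
        for line in f:
            symbol, price, size = line.rstrip("\n").split(",")
            tick = (symbol, float(price), int(size))
            for handler in handlers:
                handler(tick)  # strategies/backtester subscribe here
            count += 1
    return count

# Build a tiny capture, then replay it through one collecting handler.
with gzip.open("replay_demo.gz", "wt") as f:
    f.write("SPY,155.25,100\nSPY,155.30,200\n")
seen: list = []
print(replay("replay_demo.gz", [seen.append]))  # 2
```

Because decompression is streamed, memory stays bounded regardless of how large the day's capture is, which matters on the 12GB machines described below.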

I have created a WPF viewer for the Model that displays the market data and various transformations (differencing, moving averages, volume, cumulative volume, quote frequency, and so forth). I use the Visiblox package, which greatly facilitates this, and I include annotations showing where I’ve placed my trades so that I have a visual sense of the strategy. Additionally, because I have the full Model characteristics, I can compute various performance measures against the backtest (Sharpe ratio, annualized return, and so forth).
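One of the performance measures mentioned, the annualized Sharpe ratio, can be computed from a backtest's daily return series. This sketch assumes the common 252-trading-day convention and a zero risk-free rate; the returns are invented for illustration:

```python
# Annualized Sharpe ratio from daily strategy returns (assumptions:
# 252 trading days per year, zero risk-free rate, sample std deviation).
import math

def sharpe_ratio(daily_returns, periods_per_year: int = 252) -> float:
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

returns = [0.001, -0.0005, 0.002, 0.0008, -0.001]  # hypothetical daily P&L
print(round(sharpe_ratio(returns), 2))  # 6.05
```

In practice one would feed in the full backtest period rather than five days; the tiny sample here is just to show the arithmetic.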

Now, the entire process I have described is necessary because I am using machines with only 12GB of memory, while each day’s worth of compressed CTS/CQS data is approximately 3GB. If I had access to a 64GB or 128GB machine, the backtesting procedure would be far quicker, as I could load an entire month or two of data into memory and never have to access secondary storage (be it an HDD or SSD).

My current project is to move the entire backtesting apparatus onto the Microsoft Azure platform, so that I can fully avail myself of the “utility computing” model and backtest day and night with practically unlimited resources. As trading volumes have decreased, backtesting with home-grown software has actually become more tractable. That is another reason why I develop fully on the Microsoft stack: everything just “works” together, without headaches over which version of Linux I’m using and so forth. But that’s just a personal aside.

The gold standard for these sorts of systems is, in the final analysis, of course KDB+, which is incredibly fast and powerful. It is an in-memory database with an exceptionally brilliant design, and it comes with its own extremely concise language (q). But since I am a freelancer, I have had to develop my own techniques for managing large amounts of tick data.

I hope this article is useful to other financial technologists who regularly record and analyze capital market data.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, and on LinkedIn at, or e-mail him at

In Defense of Automated Trading


In the past several years, an enormous amount of invective has been hurled at “high-frequency trading”, “algorithmic trading”, and their synonymous brethren. The acerbic treatment afforded to this style of trading has been particularly exacerbated by the conflagration that is the global financial crisis, with no end in sight. A large part of the criticism originates from non-practitioners, not only of automated trading, but of the capital markets in general. Even specialized financial media sources such as ZeroHedge interminably condemn both the electronic trading apparatus and the market regulators who face a daily supervisory penance.

As is the case with most aspects of the social world that are touched by technology, trading in the capital markets has experienced a thorough revolution, nay series of revolutions, over the course of the preceding decades. Should we be so bewildered that trading systems incorporate technological developments such as many-core GPU processing, reconfigurable hardware, and in-memory databases? Is it so mystifying that the ubiquitous (and relatively obese) financial services industry avails itself of services such as co-location, data mining, satellite communication, and microwave transmission? After all, a visit to the theater to watch the latest animated film, or to your local healthcare facility to procure an MRI, will each demonstrate the same pervasive technological transformation. Why must institutions trusted with the vital task of growing investors’ capital be precluded from availing themselves of cutting-edge methodology and devices?

It has become an invariant that the landscape of computerized trading systems is lumped into an opaque, monolithic entity that few on the planet seemingly comprehend. This perspective, however, is simply wrong, and masks the tremendous diversity of specific roles that the landscape actually contains. For example, “high-frequency trading” systems, which usually operate in the context of a proprietary broker-dealer or market-making platform with direct connections to the exchanges, are very different beasts from algorithmic trading systems, which operate in an agency capacity at a broker-dealer, likewise with direct connections to the exchanges and harboring a similar assortment of supporting tools and technology. Each of these, in turn, is different from the extensive use of computers in quantitative modeling and trading, often performed by a hedge fund. This simple, non-exhaustive list of automated trading examples reveals the diversity in purpose, timescale, and methodology employed by the firms operating these systems.

Let us first distinguish between which facets and consequences of automated trading should be important to non-participants, and which should not. Firstly, let us immediately disregard the magnitude of (potential) profits experienced by these firms. In a world of “expert networks”, calamitous multi-billion dollar CDS bets, and nefarious manipulation of global interest-rate benchmarks, this avenue of criticism simply cannot be maintained with any modicum of integrity. The two most important aspects of the market from a participant’s point of view should be transaction cost and price discovery.

From the perspective of transaction cost, it has never been cheaper, faster, and more convenient to execute trades in the U.S. and various other global capital markets. Whether from a web browser or a mobile device, placing a trade and receiving the execution can transpire in a matter of seconds for humans. Compare this with a telephone discussion with a stockbroker, who then would have to contact their trading desk or floor brokers; after a chaotic exchange through an outcry market (or even an electronic matching system controlled by a specialist or market maker), fills and executions would finally be disseminated back to the client, with a phone call, voicemail, or snail mail as the method of conveying the transaction. Contrast this with the millisecond response on a simple handheld device, plus instant and powerful accounting features. The reason this is possible is technology, the same technology that is being castigated as an evil to be eliminated.

Another component influencing transaction cost is the actual commissions, either direct or indirect in the form of markups and spreads paid to dealers. There was once a time when being a NYSE specialist or a Nasdaq market-maker at a leading firm was a highly coveted role, often bringing in bonuses on the order of several million dollars per annum to those lucky few. In fact, war stories of completely unethical rip-offs of clients circulated copiously within trading circles at the evening session around the bar or dinner table. It is because of widespread unethical practices such as frontrunning, principal trading by specialists, fading of quotes, expansion of spreads, and so forth that order handling rules had to be implemented, and decimalization was invited by the buy-side participants. Perhaps the culmination of the buy-side’s (and other market participants’) angst towards this process was the filing of the lawsuit against the NYSE by Calpers in the early 2000s. But the writing was already on the wall at this point for the high-flying specialists and market-makers, as sell-side algorithmic and program trading began to seriously eat into their monopolized businesses. At this point, I must interject and state rather openly that I find it amusing that the most vocal opponents of “machines” and “algo trading” were often these very same individuals, who were maintaining exceptionally comfortable livelihoods to the detriment of mutual funds, pensioners, and small retail investors. To reiterate, never in the history of the global capital markets has it been cheaper or more efficient for any institution or human to transact in the public capital markets (at least pertaining to liquid instruments). If a firm is engaged in procuring highly exotic and complex forwards or derivatives, let it be said that they are at the mercy of their counterparty or dealer, and they may simply be digging a new grave at a different casino table.

The second fundamental aspect from a market participant’s perspective is the notion of price discovery, which is to ask whether the market price of a particular instrument represents the true sentiment among all the participants as a whole. Counterexamples of this are the virtually daily millisecond “flash crashes” that occur in various stocks, often independent of characteristics such as liquidity or volatility. These occur because the speed at which algorithms operate, and the inability of various machines to distinguish very atypical market conditions, create positive feedback loops that result in small-time-interval “catastrophes”. There are three important points that pertain to these phenomena. Firstly, the time intervals are so small as to be rather irrelevant to human traders. If a flash crash occurs on the order of half a second, I’d be rather impressed if a large number of human traders were affected during the physical process of placing a trade. Secondly, the exchanges have been very accommodative in busting the executions that occurred during those particular intervals. Thirdly, the “flash crash” (or inverse price movement) is temporary, and the price reverts back to a non-absurd level relatively quickly. There is a dearth of permanent price impact produced by the cascade of erratic machine behavior and stop triggers. In fact, there are strategies that seek to profit precisely from these sorts of crashes, by quickly buying into extreme dislocations in price. As was the case for transaction costs, if buy-side firms or market participants attempt to “defect” in a prisoner’s dilemma by using dark pools or electronic liquidity providers (ELPs), then they run the risk of being buried in a different grave, away from the quoted markets in which there is a semblance of discipline and transparency. This is not to say that “trading in the dark” is necessarily a dangerous notion; however, just as with the complex-derivatives example, it is necessary for the market participant to understand the advantages and risks involved in doing so.

It has never been easier to engage in the capital markets. As I’ve traveled through the developing economies of Latin America and Asia, I have seen a tremendous difference from my prior visits in the market knowledge pursued by the young professionals who are earning money to invest. Wall Street has always been rife with bubbles and busts, without exception, throughout history. This, in fact, is true of any global marketplace. We have observed unethical and nefarious behavior at very high levels, involving complex instruments, illegal information dissemination, very large notional values, and outright fraud. But these evils employ centuries-old implements of deception and accounting gimmicks. Trading volumes have greatly decreased, and the residual volume is often machines trading against each other. Exchanges, in fact, would probably go out of business were it not for these automated trading systems attempting to capture tiny profits at a very high frequency. To suddenly shift the blame towards technological advances that are, undoubtedly, double-edged swords, and towards the talented people who attempt to learn from and profit by them, is an argument that simply cannot be maintained vis-à-vis the broader picture.

Copyright © 2013, Srikant Krishna

Srikant Krishna is a financial technologist and quantitative trader. He has a background in biophysics, software development, and the capital markets.

You can follow him on Twitter @SrikantKrishna, and on LinkedIn at, or e-mail him at