FMV PART 4: FAIR MARKET VALUE IN BANKRUPTCY


The primary thesis of this blog series is that assessing fair market value in bankruptcy must now account for general economic conditions, given the Federal Reserve's distortion of asset values through unlimited monetary expansion and the unavoidable influence of the global derivatives trading markets.

In Part 1 and Part 2 we addressed the mechanisms of value distortion arising from Fed policy and the derivatives trading markets. Even with these distortions of fair market value, the standard valuation models used in bankruptcy courts and other civil courts should continue to work reasonably well, provided valuation experts and courts adapt those models to more fully account for general economic conditions.

In Part 3, we pointed out the failure of traditional calculus-based valuation models to correctly value events that supposedly have a "once-in-a-century" probabilistic chance of occurring, yet seem to occur every few years in our current economy. If traditional estimates of value (such as the Black-Scholes valuation equation) are not entirely up to the task in today's economy, then where do we look for additional guidance?

Clearly, our legal system will continue to rely on the standard valuation metrics outlined by Chief Judge Christopher S. Sontchi, insofar as possible. But we also must be realistic about increased risk to every asset class as a result of the extraordinary impact of Federal Reserve policy and the global derivatives trade. So where might we begin in tackling this task? We suggest a careful examination of complexity theory is a good place to start.

A BRIEF INTRODUCTION TO COMPLEXITY THEORY

The basic premise of complexity theory is that there is a hidden order to the behavior (and evolution) of complex systems, whether that system is a national economy, an ecosystem, an organization, or a production line. For example, complexity theory is routinely used to estimate the direction and velocity of hurricanes, accounting for the large directional changes that small inputs can produce. Meteorologists can now predict a hurricane's direction and velocity within a reasonable range and time window. Weather events are, of course, classic examples of complex dynamical systems.

AN EXAMPLE OF COMPLEX SYSTEMS ANALYSIS IN THE REAL-WORLD ECONOMY

How to correctly model complex dynamical systems in the physical world was demonstrated by Caltech professor Carver Mead in his groundbreaking work on theoretical physics and the electromagnetic properties of electrons. Prof. Mead demonstrated, first theoretically and then experimentally, that the flow of electrons in a physical system cannot be modeled as a set of probabilistically independent events, but can only be understood as a collective dynamical ecosystem.

The real-world proof of Professor Mead's radically different approach came in the design of computer chips by Intel Corp. that followed his theory of collective electrodynamics. Prof. Mead applied his theoretical and experimental work in the real-world economy, helping make Intel Corp. a fabulously valuable company in the process. Working with Intel co-founder Gordon Moore, Prof. Mead first coined the term "Moore's Law," famously known today as describing how the number of transistors on a chip (and with it, computing power) roughly doubles every couple of years. Only by following Prof. Mead's calculations was Intel able to keep making computer chips smaller and more efficient.

This is a spectacular case of theoretical physics demonstrating the truth of its theories in a real-world application that has, literally, changed the entire world. In Prof. Mead's case, the old-school Maxwell's Equations, which treated electrons as acting independently and probabilistically according to a normal distribution, failed to account for real-world conditions. So Prof. Mead went "outside the box," observed the electromagnetic properties of electrons in real life, then modified his theory based on actual observations–observations that still hold today. (Interestingly, Prof. Mead still has skeptical critics in the world of physics who insist that the world works exactly as they learned at university, in spite of the obvious proof to the contrary.)

COMPLEXITY THEORY MODELING IN SOCIOECONOMIC SYSTEMS

Socioeconomic systems are also complex dynamical systems. In fact, it is arguable that social economies are among the most complex dynamical systems observed in the world today. The trick, of course, is finding an accurate way to express this behavior in a financial model that can be relevant to an asset valuation. Complexity theory helps us do this far more accurately than calculus-based models. This work is being done today by pioneers Doyne Farmer and Norman Packard (formerly of the Santa Fe Institute) as consultants for hedge funds and trading houses. While Messrs. Farmer and Packard have declined to disclose how much money they made using trading strategies based in complexity theory, they have said that returns from the early 1990s to 2018 were "substantially above market," with only one losing year during that period (2007).

COMPLEXITY-BASED MODELS VS. CALCULUS-BASED MODELS

A good way to understand the benefits of a complexity-based analytic method is to compare and contrast it with methods based in calculus; the differences in approach then become apparent. We start with the data used in both calculations: time-series data.

Both complexity theory and calculus begin with time-series data. A time series is a series of data points listed or graphed in time sequence. Price movements in a publicly-traded stock are a classic example: the time series captures snapshots of the exact stock price at exact moments in time. Price is typically tracked on the "y axis" of a graph, and time is typically represented on the "x axis."
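
For readers who want to see this in concrete form, here is a minimal sketch in Python (using the pandas library, with made-up prices purely for illustration) of a stock-price time series indexed by date:

```python
import pandas as pd

# Illustrative (made-up) daily closing prices: the dates form the "x axis"
# and the prices the "y axis" of the classic stock chart described above.
prices = pd.Series(
    [101.2, 102.8, 101.9, 103.5, 104.1],
    index=pd.date_range("2020-01-06", periods=5, freq="B"),
    name="close",
)

print(prices)
# prices.plot() would draw the familiar chart: time on the x axis,
# price on the y axis.
```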

Time series movements form the foundation for financial analysis under both complexity theory and calculus: so far, so good. The differences arise in how this time series data is aggregated, analyzed and interpreted. We briefly discuss the differences in the two methods, as follows:

ANALYSIS OF DYNAMICAL SYSTEMS — THE CALCULUS METHOD

The calculus analytic method (such as the Black-Scholes equation) treats individual data points (or matrices of data points) in a financial system as probabilistically independent of one another, just as Maxwell's Equations had (erroneously) treated the movement of electrons in a physical system. That is, Black-Scholes assumes each occurrence in the financial economy is a probabilistically independent event that has no direct bearing on other data points (observed occurrences) in the economy. And like the error-prone Maxwell's Equations, Black-Scholes also assumes a normal (Gaussian) distribution of all observed events, in a manner approximating a "bell curve."
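
To make the Gaussian assumption concrete, here is a minimal sketch of the textbook Black-Scholes call-price formula in Python (using scipy; the inputs are illustrative only). The distributional assumption enters through the cumulative normal function norm.cdf:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call.

    S: spot price, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: volatility of log-returns.
    The model assumes log-returns are normally distributed and
    increments are independent, the two assumptions discussed above.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Illustrative inputs only: a one-year at-the-money call.
print(black_scholes_call(S=100, K=100, T=1.0, r=0.02, sigma=0.25))
```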

While simplistically appealing (which makes modeling easier), neither of these assumptions holds up in the real world of complex systems. (Just as Maxwell's Equations failed in the design of computer chips, Black-Scholes-style models failed to anticipate, and contributed heavily to, the 1998 collapse of the hedge fund Long-Term Capital Management, whose unraveling, triggered by the Russian debt default, rippled through emerging markets and required a Fed-brokered rescue.)

SO…WHAT’S WRONG WITH THE BELL CURVE?

The normal distribution, or bell curve, is something we are all familiar with from our school days: most students will fit an average profile, with some below average and some above average (well, everyone except students from Stanford or Yale, who are assumed–probably correctly–to all be far above average). A "normal" distribution is then built out in standard deviations to the left and right of the mean (average), with the vast majority of students falling within one or two standard deviations of the mean. These assumptions produce a nicely formed bell curve.

The same goes for the likelihood of events in the economy under models such as Black-Scholes: the dominant expectation centers around the mean of the bell curve, with some expected events dispersed to its left and right. Some events are considered so statistically unlikely–so far out on the tails of the distribution–that they are ignored for modeling purposes. These are the proverbial "long-tail" or "black swan" events.
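
The bell curve's own arithmetic shows just how rare it says such moves should be. The short sketch below (Python, scipy) computes the two-sided probability of a daily move of three, four, or five standard deviations under a normal distribution, and the implied waiting time in trading years, assuming roughly 252 trading days per year:

```python
from scipy.stats import norm

TRADING_DAYS_PER_YEAR = 252

for k in (3, 4, 5):
    # Two-sided probability of a move at least k standard deviations from the mean.
    p = 2 * norm.sf(k)
    # Under the bell curve, expected waiting time for such a move, in trading years.
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k}-sigma daily move: probability {p:.2e}, "
          f"about once every {years:,.0f} trading years")
```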

And this is the fundamental problem with using models that assume a normal distribution to analyze a complex dynamical system like an economy or financial ecosystem. In an actual complex system, there is really no such thing as an average economic actor, or average event, or average response to an event, that can be extrapolated into a bell curve representation using standard deviation assumptions. This simply is not how complex dynamical systems work in the real world.

ANALYSIS OF DYNAMICAL SYSTEMS — APPLICATION OF COMPLEXITY THEORY

Complexity theory does not assume a normal distribution of data, does not use standard deviations or probability calculations, and rejects the (erroneous) idea that observed data-points are statistically independent of one another. Instead, complexity theory assumes data-points actually depend on one another, and that movements in some data-points will affect the entire ecosystem. Complexity theory therefore analyzes the ecosystem as a whole, correlating the inter-relationships among data-points to reveal repeating patterns and attractors. These patterns and attractors reveal a predictable system state for the entire dynamical system.
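
One standard tool from this school of analysis is time-delay embedding, which unfolds a single time series into a higher-dimensional space where repeating patterns and attractors (if present) show up as geometric structure. The sketch below is a minimal Python/NumPy illustration using a synthetic series rather than real market data; it is meant to convey the idea, not to serve as a production method:

```python
import numpy as np

def delay_embed(series, dim=3, lag=1):
    """Time-delay embedding: map a 1-D series into points
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag]) so that repeating
    patterns and attractors appear as geometric structure."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

# Synthetic stand-in for a market variable: a noisy periodic signal.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

embedded = delay_embed(x, dim=3, lag=25)
print(embedded.shape)  # each row is one point on the reconstructed attractor
```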

The mechanics of building an analytic model using complexity theory look something like this:

–Start with Time Series Data. Time series data is any data that captures the value of a measured variable at successive moments in time. The example most familiar to everyone is probably a simple stock chart. Plotted on a simple graph with x and y axes, a stock chart shows share price movements over time. Hence the name "time-series" data.

In complexity theory, a single stock chart–showing share price movement as a function of time, for example–might be an important variable for purposes of our analysis. This one variable (share price movement over time) is referred to as a single dimension in the complex system. In complex systems analysis, this single variable (share price as a function of time, measured on x and y axes) becomes just one of many variables (dimensions) that are first individually measured, then computed in parallel to see patterns of interaction among the numerous variables in a complex system. But the measurement method always starts the same way: the capture of time series data for the specific variables deemed important for our analysis.

–Strive for High Dimensionality. High dimensionality (numerous variables) is a critical requirement of complexity theory analysis. Only if multiple variables can be identified and processed simultaneously, in parallel, can the interactions among variables become visible, thereby revealing the patterns and attractors we are trying to identify for predictive purposes. Historically, computational power was limited, so analysts were required to compute a single variable at a time, or at most a few variables simultaneously.

But nowadays, in 2020, cheap computing power for massively parallel computations is readily available. For example, AMD Corp. now produces its Radeon product line designed to accelerate deep learning and high-performance computing/GPGPU applications. GPU processors can perform a massive number of simple computations in parallel, as is required for high-dimensional complex systems analysis. Crucially for complexity analysis, data dimensionality is not reduced into matrices of independent data-points (a technique used to make big data fit calculus-based models) but rather is expanded to the computational limits of the available GPU processors and algorithms–limits that are substantial nowadays.

–Simultaneously Process all Dimensional Data. The advantage of newly-developed GPU processors, introduced above, is the ability to process massive amounts of data simultaneously using parallel computing techniques. This simultaneous processing of data is what reveals patterns and attractors in the time-series data variables used for analysis.

–Derive Findings and Conclusions. The goal of complexity analytics is to identify the patterns and attractors that appear in graphs of the data. In the context of valuation analysis, this data will help reveal, for example, the propensity for economic "black swan" events that can drastically undermine the reliability of an asset valuation. (A compressed code sketch of these four steps appears just after this list.)
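
A compressed sketch of these four steps, with synthetic data standing in for real market variables, might look like the following (Python/NumPy; the variable names are purely illustrative, and a real analysis would use far more dimensions and, as noted above, GPU hardware rather than a CPU):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 -- time series data: synthetic daily series standing in for real
# variables (e.g., credit spreads, CDS prices, repo rates, an equity index).
n_days, n_vars = 1000, 8
levels = np.cumsum(rng.standard_normal((n_days, n_vars)), axis=0)

# Step 2 -- high dimensionality: keep every variable side by side rather than
# collapsing them into a single summary statistic.
moves = np.diff(levels, axis=0)

# Step 3 -- simultaneous processing: one vectorized call computes the full
# matrix of pairwise relationships across all dimensions at once (on a GPU,
# the same operation would simply run over far more variables).
corr = np.corrcoef(moves, rowvar=False)

# Step 4 -- findings: flag the most strongly coupled pairs of variables, the
# raw material for spotting repeating patterns and shared attractors.
i, j = np.triu_indices(n_vars, k=1)
strongest = np.argsort(-np.abs(corr[i, j]))[:3]
for k in strongest:
    print(f"variables {i[k]} and {j[k]}: correlation {corr[i[k], j[k]]:+.2f}")
```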

Many valuation experts have historically attempted to account for broader economic conditions using a very limited number of variables (such as stock indices or interest rates), but typically have not accounted for statistically unlikely events–i.e., events that deviate too far from the bell curve mean. But today, recurring economic crises are no longer remote statistical deviations–they occur every 8 years or so.

The primary reasons for this, as addressed in Part 1 and Part 2 of this discussion series, are market distortions caused by Federal Reserve policies and the massive global derivatives trade that now forms the very foundation of our financial system. Because disruptive events caused by these market distortions are a reality in our economy in 2020, we must do our best to account for them in order to arrive at a fair market value that will actually mean something in the months and years following the valuation date.

So what variables in the real world might valuation experts examine to apply complexity analytics to real-world valuation problems? We turn to that question next.

MODELS FOR ASSESSING FAIR MARKET VALUE IN BANKRUPTCY

We continue to assert that the standard valuation models for assessing fair market value in bankruptcy, and in other civil courts, can continue to work acceptably well, even in times of asset value distortion, as long as proper attention is paid to global economic conditions. This requires courts and valuation experts to account for the impact speculative derivatives may have on the value of an asset and its asset class. Choosing the specific variables to analyze in understanding this problem must be determined, of course, by the valuation expert in each case. But a few of the important variables to consider, in our view, are:

–Credit Market Data. Credit market data is available from numerous sources, since credit is such an important element in today's economy. For example, the Federal Reserve Bank of St. Louis publishes extensive credit-market data across numerous categories. Additionally, Moody's and Standard & Poor's (and other similar rating agencies) track and rate corporate credits that directly influence prices and yields on corporate bonds. There is truly a mountain of corporate credit data available for analysis in any particular case.

–CDS Market Data. Credit default swaps are key early indicators of problems in a company, an industry, or a country's economy. Early changes in CDS prices often presage later changes reflected in the credit markets and by rating agencies. IHS Markit is the go-to source for CDS pricing and related information.

–Repo Market Data. In a "repo" transaction, banks (including central banks) temporarily "sell" assets (typically government securities) to other banks or hedge funds, with an agreement that the buyer will "sell back" the assets within a short period of time–often overnight. This is typically done to temporarily increase (i.e., manipulate) the buyer's balance sheet to show a higher asset value needed to meet regulatory requirements or borrowing collateral requirements with third parties. The repo market is a great early indicator of the state of an economy, since large banks (repo lenders) typically have credit information on borrowers that is not yet available in the general market. The Federal Reserve Bank of New York conducts the Fed's repo operations and makes some repo data available.

–Equity Buyback Data. Share buybacks by select public companies have been a primary driver of stock market increases in the past few years. Equity buybacks are announced by each company in advance of an offer, and this publicly available information is therefore readily accessible through SEC filings. The vast majority of share buybacks in recent years have been funded by corporate debt, making this valuation metric an important link between corporate equity and debt.

–Fed Balance Sheet Data. The Federal Reserve balance sheet is also a critical variable to include in any analysis of the state of our economy. The Fed balance sheet reflects purchases of assets by the Federal Reserve to boost the economy or otherwise take troubled assets off the books of companies. Most of the Fed's balance sheet growth since 2008 has come in the form of Quantitative Easing, in various rounds. That growth now totals roughly $6 trillion, and counting, with no end in sight. (A short data-retrieval sketch follows this list.)
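
As one concrete illustration of retrieving this kind of data, the sketch below pulls the Fed's total-assets series from the St. Louis Fed's FRED database using the pandas-datareader package. It assumes the FRED series ID WALCL (total Federal Reserve assets, reported weekly); other FRED series could be substituted for the credit or repo variables discussed above:

```python
from datetime import datetime

import pandas_datareader.data as web

# Assumption: "WALCL" is the FRED series ID for total Federal Reserve assets
# (weekly, in millions of dollars). Swap in other FRED series IDs for
# credit-market or repo-market variables as needed.
start, end = datetime(2008, 1, 1), datetime(2020, 6, 1)
fed_assets = web.DataReader("WALCL", "fred", start, end)

print(fed_assets.tail())  # most recent observations of the Fed's balance sheet
```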

CONCLUSION

The list of variables to include in assessing general economic conditions, as part of assessing fair market value in bankruptcy, is obviously much longer than the few items above, which are offered here only as examples. Valuation experts can more carefully select the variables relevant to a particular valuation, as required. The important point is this:

General economic influences on asset value are too important–too omnipresent–to be ignored by a valuation expert simply because they are difficult to calculate or predict. Disruptive events and market distortions of fair market value are a reality in our economy in 2020. We therefore cannot pretend they do not exist. Rather, we must do our best to account for them in order to arrive at a fair market value that will actually mean something in the months and years following the valuation date.

We suggest that incorporating complexity theory into standard valuation models is a useful approach that will improve assessments of fair market value in bankruptcy proceedings, even in times of value distortion.
