Tuesday, December 20, 2016

sk8

x-sports
Skateboarding is hard.

Really hard.

It is one of those pursuits that allows humans to demonstrate insane skills.

Like:
Untethered high-lining (Dean Potter)


Proximity wingsuit flying (Halvor Angvik)


Triple corks on skis (Bobby Brown)


Quad corks on snowboards (Billy Morgan)


Big wave surfing (Garrett McNamara)


Jumping into water (Laso Schaller, Dana Kunze)


Free soloing (Alex Honnold)


Skateboarding is very technical. The simplest trick is an Ollie. You basically just pop the board into the air using your feet. This is the basis for all street tricks. Unfortunately, mastering an Ollie takes a long time.

First you need to be comfortable riding your board. If you find balancing hard, try riding tight trucks. Once you get the hang of it, loosen them and enjoy the added sense of maneuverability. Now you can start to practice Ollies while stationary. Then while riding. Yes, you need to be patient.

Now, I thought that decades of snowboarding would help. Well, it helps a little bit. It still took me some time to feel safe cruising. Coming from snowboarding, I needed some time to get used to the unfamiliar freedom of being able to place my feet anywhere on the skateboard (but this then turned out to really help my surfing!).

On a snowboard, I can pretty much do everything switch, even at high speeds. However, riding switch or fakie on a skateboard feels super uncomfortable. Yes, patience.

So, just to get the basics down takes ages. You can be skateboarding for quite some time and everything you do looks easy and unspectacular to bystanders.

Now imagine the level of skill you need to be able to do this:


Or this:


Or this:


Or this:


Or this:


Skateboarding is also hard because it is done on concrete or asphalt, with edges and corners looming everywhere. It is really intimidating to know that if you have to bail, you won't be landing in water or on snow.

For some insane reason there is an unwritten law that dictates that street-style skateboarding is best executed without any kind of protection:

And don't search for "skateboard slam sessions" on YouTube.

So, why bother? Simple, it is just so much fun! Even the beginner parts;)

It is about a philosophy, an outlook on life.  A simple feeling, a state of being. Watch skate legend Rodney Mullen's TED talk. Or watch him perform at the age of 50:


Or listen to this geezer:


Or watch Tony Hawk reenacting his 900 at the age of 48:


And then there is vert and big-air skateboarding next to street...


For me, at 44, I just want to be able to cruise around town (meaning I need to be able to Ollie up a curb at some point) and learn to ride a bowl/mini ramp (i.e., frozen wave). All this while not getting hurt. The idea is to bridge snowboarding and surfing with skateboarding. It's about developing a new kind of intelligence in my feet and legs.

Still a long way to go:


My board has a wide deck, wide trucks, and soft wheels...

Monday, December 19, 2016

the center of the universe

information is physical
Just a random thought:

Earth is literally the center of the universe in terms of information processing.


Or at least, as far as we are aware.

Because:

In the beginning, a few billion years ago, information processing manifested itself on a molecular basis, utilizing self-replicators, as the emergent process called life unleashed itself.

Then, not too long ago, the neural networks assembled by the process of life became conscious and self-aware.

And currently, these sentient beings are bringing information processing to a non-organic platform: binary computation.

Finally, these cognizant beings are pushing the limits of information processing into the quantum realm, employing technology that emerged from their own information-processing capabilities. This unlocks unprecedented new levels of computation.

--

Edit: made a meme out of this:




Thursday, December 1, 2016

What is Real?

am I even real?
Met the wonderful Lucy Hawking at TEDxSalford by chance (Science and Storytelling, The Consciousness of Reality). This led to an amazing opportunity allowing me to contribute a science essay to her newest children's book:

George and the Blue Moon
Lucy and Stephen Hawking
Penguin, 2016

Staying true to my little hobby, it was called:

What is Reality?

And it started like this:
Every day you wake up. Returning from the wonderful adventures you may have been having in your dreams, you become you again. The memories of who you are and what you have been up to in your life come back. And you also realize that there is a world that lies outside of yourself, simply called reality. Then you get up.

This all seems very ordinary and not very exciting. However, all of this is linked to the hardest question that humans have ever asked themselves: What exactly is reality? What is this thing, made up of space, time and objects, we live in?

And ended like this:
But for the moment we can comfort ourselves with two answers to the question, ‘What is reality?’

One is that reality is a much bigger, richer and more complex thing than we ever dared to dream.

Or a short answer could be, ‘I create my reality!’

Thursday, November 24, 2016

creativity

reggie watts is a genius
Some people are just insanely creative. We already met Beardyman. Reggie Watts is equipped with a similar set of skills. His web page labels him a vocal artist, beatboxer, musician, and comedian. His performances are random, often improvised sequences of live looping and colorful vocal outbursts, made up of sounds and noises and narration in various accents and (mock) languages.


To appreciate how his talent affects people, one can read the remarkable compliments in the comment sections of the YouTube videos of his performances, a refreshing, albeit rare, contrast to the hatred and animosity mostly encountered there:
  • "Reggie Watts is my favorite human."
  • "He really is light years ahead of the rest of us humans. I love him. He is a genius."
  • "What planet did he come from? We don't deserve this kind of being. We're not worthy."
  • "This man is light-years ahead of his time."
  • "Absolutely no clue what was happening but brilliant"
  • "What a fucking genius..."
  • "I hereby nominate Reggie Watts to be Ambassador of Earth, and be first to make contact with aliens should they visit."
  • "Reggie is at the end of my rainbow!"
  • "I don't even laugh when watching Reggie anymore. I just admire him."



One recurring theme is Watts speaking with a British accent reminiscent of a university professor, talking abstract nonsense. Or is it?
"And the important thing to remember is that this simulation is a good one. It's believable, it's tactile. You can reach out -- things are solid. You can move objects from one area to another. You can feel your body. You can say, 'I'd like to go over to this location,' and you can move this mass of molecules through the air over to another location, at will."


"Now, we know that everything here is an illusion and that we are somewhere else. But the cool thing about that is, it feels pretty real. I mean -- you know what I mean? Like, it's pretty convincing. So, big credit to those people working hard there."


Why should you consider reality to be a simulation/illusion? Well...


some stuff

on being idiosyncratic
next to my love of science (see all the boring stuff;)




I really enjoy

  • snowboarding (nearly 30 years), climbing (23 years), surfing (20 years plus), and skateboarding (1/2 year)



  • traveling

  • electronic music and related parties/festivals





and then (in random order)


I constantly have to wonder about the existence of my own mind, the conscious experience it gives me of an external reality, and what this all could possibly mean (yeah, book project). 


I also like to be highly critical of the socio-cultural environment I was born into and from there move on to being critical of other ones (rant and rant). I am highly skeptical of our financial systems (faults and greed).


I like to question myself and my ideas/beliefs.


I try to put myself into other people's shoes, as I believe I would be that same person, given the same biography and brain chemistry/hard-wiring.


I am an irrational optimist. Although I see, in my opinion, so many things that are terribly and depressingly wrong all over the world, I try to keep my faith (this here).


I get inspired by a spiritual outlook on life that seeks happiness and wisdom within oneself and allows for the existence of other realms of "reality" outside space and time (e.g., Buddhism and certain esoteric ideas). I totally and fundamentally reject institutionalized theologies. Does the term "spiritual atheism" make any sense?


I had been vegetarian for 12 years before turning vegan (as best as I can) 4 years ago. Why? Environmental, ethical, and health considerations (once I get around to it, this will be a long and heavily referenced piece).


I aim at remaining grateful for experiencing this stream of consciousness, regardless of its contents.


I try to resist the urge to be cynical as fuck as much as I can (e.g., while interacting with crackpots in news groups or discussing climate change).


I am deeply thankful to all the loved ones in my life, especially my wife, who make this journey so much more fun <3




Monday, May 30, 2016

swimming in the sea of knowledge

we live in truly interesting times
We take one of the most amazing and far-reaching achievements in recent times for granted: free access to knowledge.

The advent of user-generated content, the so-called Web 2.0, has enabled initiatives like Wikipedia to assemble an unfathomable amount of human knowledge --- at your fingertips. The Google Books Project has scanned and digitized millions of books, making them searchable online.

Google Scholar is a search engine for accessing countless published scholarly articles. Many publications nowadays are open access, and working papers or preprints are often available (e.g., on arxiv.org, biorxiv.org, ssrn.com). If this isn't enough, "Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it" (src). Obviously, this results in a cat-and-mouse game:
  • http://sci-hub.io/
  • http://sci-hub.bz/
  • ...
  • TOR scihub22266oqcxt.onion
But access alone is not enough. The sheer amount of information is mind-blowing. So, how can one navigate this sea of knowledge without drowning?

Enter YouTube, or rather its content providers. There exists a multitude of channels featuring videos aimed at explaining countless topics, from science to philosophy. But crucially, this is done in an entertaining and/or visually appealing manner. Some of my favorites are: Kurzgesagt – In a Nutshell, CrashCourse, Vsauce, Veritasium, MinutePhysics, or one of the channels of Brady Haran (list).

And, last but not least, TED and TEDx talks entertain "ideas worth spreading". In other words, personal insights from people working at the cutting edge of current knowledge or simply talks packed with inspiration.

This all means that you have a nearly inexhaustible treasure trove of knowledge at your free disposal, broken down into piecemeal units, ready for instant education.

Enjoy:)











--
Edit: Some of my YouTube playlists:




Thursday, May 26, 2016

more random quotes: scott aaronson

new perspectives
So, John Horgan, the End of Science guy, interviewed Scott Aaronson, a theoretical computer scientist interested in quantum computing and computational complexity theory.

In the following, some random quotes.

On Quantum Mechanics

    [Q]uantum mechanics is astonishingly simple—once you take the physics out of it!  In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.

    [A]ccepting quantum mechanics didn’t mean giving up on the computational worldview: it meant upgrading it, making it richer than before.  There was a programming language fundamentally stronger than BASIC, or Pascal, or C—at least with regard to what it let you compute in reasonable amounts of time.  And yet this quantum language had clear rules of its own; there were things that not even it let you do (and one could prove that); it still wasn’t anything-goes. 


The Computational Universe

    If it’s worthwhile to build the LHC or LIGO—wonderful machines that so far, have mostly triumphantly confirmed our existing theories—then it seems at least as worthwhile to build a scalable quantum computer, and thereby prove that our universe really does have this immense computational power beneath the surface. 

    Firstly, quantum computing has supplied probably the clearest language ever invented—namely, the language of qubits, quantum circuits, and so on—for talking about quantum mechanics itself.
[...]
Secondly, one of the most important things we’ve learned about quantum gravity—which emerged from the work of Stephen Hawking and the late Jacob Bekenstein in the 1970s—is that in quantum gravity, unlike in any previous physical theory, the total number of bits (or actually qubits) that can be stored in a bounded region of space is finite rather than infinite.  In fact, a black hole is the densest hard disk allowed by the laws of physics, and it stores a “mere” 10^69 qubits per square meter of its event horizon!  And because of the dark energy (the thing, discovered in 1998, that’s pushing the galaxies apart at an exponential rate), the number of qubits that can be stored in our entire observable universe appears to be at most about 10^122.
[...]
So, that immediately suggests a picture of the universe, at the Planck scale of 10^-33 meters or 10^-43 seconds, as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation. 

The Big Picture

    Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible.  Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory. 

    [S]ome of the conceptual problems of quantum gravity turn out to involve my own field of computational complexity in a surprisingly nontrivial way.  The connection was first made in 2013, in a remarkable paper by Daniel Harlow and Patrick Hayden.  Harlow and Hayden were addressing the so-called “firewall paradox,” which had lit the theoretical physics world on fire (har, har) over the previous year.

    In summary, I predict that ideas from quantum information and computation will be helpful—and possibly even essential—for continued progress on the conceptual puzzles of quantum gravity. 


    If civilization lasts long enough, then there’s absolutely no reason why there couldn’t be further discoveries about the natural world as fundamental as relativity or evolution. One possible example would be an experimentally-confirmed theory of a discrete structure underlying space and time, which the black-hole entropy gives us some reason to suspect is there. 

P/NP

    [T]he ocean of mathematical understanding just keeps monotonically rising, and we’ve seen it reach peaks like Fermat’s Last Theorem that had once been synonyms for hopelessness.  I see absolutely no reason why the same ocean can’t someday swallow P vs. NP, provided our civilization lasts long enough.  In fact, whether our civilization will last long enough is by far my biggest uncertainty. 

    More seriously, it was realized in the 1970s that techniques borrowed from mathematical logic—the ones that Gödel and Turing wielded to such great effect in the 1930s—can’t possibly work, by themselves, to resolve P vs. NP.  Then, in the 1980s, there were some spectacular successes, using techniques from combinatorics, to prove limitations on restricted types of algorithms.  Some experts felt that a proof of P≠NP was right around the corner.  But in the 1990s, Alexander Razborov and Steven Rudich discovered something mind-blowing: that the combinatorial techniques from the 1980s, if pushed just slightly further, would start “biting themselves in the rear end,” and would prove NP problems to be easier at the same time they were proving them to be harder!  Since it’s no good to have a proof that also proves the opposite of what it set out to prove, new ideas were again needed to break the impasse. 


Musings

    This characteristic of quantum mechanics—the way it stakes out an “intermediate zone,” where (for example) n qubits are stronger than n classical bits, but weaker than 2^n classical bits, and where entanglement is stronger than classical correlation, but weaker than classical communication—is so weird and subtle that no science-fiction writer would have had the imagination to invent it.  But to me, that’s what makes quantum information interesting: that this isn’t a resource that fits our pre-existing categories, that we need to approach it as a genuinely new thing. 

    [I]f scanning my brain state, duplicating it like computer software, etc. were somehow shown to be fundamentally impossible, then I don’t know what more science could possibly say in favor of “free will being real”!


    I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.

    Just in the fields that I know something about, NP-completeness, public-key cryptography, Shor’s algorithm, the dark energy, the Hawking-Bekenstein entropy of black holes, and holographic dualities are six examples of fundamental discoveries from the 1970s to the 1990s that seem able to hold their heads high against almost anything discovered earlier (if not quite relativity or evolution).

Wednesday, February 17, 2016

Decoding Financial Networks: Hidden Dangers and Effective Policies 


Two changes have ushered in a new era of analyzing the complex and interdependent world surrounding us. One is related to the increased influx of data, furnishing the raw material for this revolution that is now starting to impact economic thinking. The second change is due to a subtler reason: a paradigm shift in the analysis of complex systems.

The buzzword "big data" is slowly being replaced by what is becoming established as "data science." While the cost of computer storage is continually falling, storage capacity is increasing at an exponential rate. In effect, seemingly endless streams of data, originating from countless human endeavors, are continually flowing along global information superhighways and being stored not only in server farms and the cloud, but -- importantly -- also in the researcher's local databases. However, collecting and storing raw data is futile if there is no way to extract meaningful information from it. Here, the budding science of complex systems is helping distill meaning from this data deluge.

Traditional problem-solving has been strongly shaped by the success of the reductionist approach taken in science. Put in the simplest terms, the focus has traditionally been on things in isolation -- on the tangible, the tractable, the malleable. But not so long ago, this focus shifted to a subtler dimension of our reality, where the isolation is overcome. Indeed, seemingly single and independent entities are always components of larger units of organization and hence influence each other. Our world, while still being comprised of many of the same "things" as in the past, has become highly networked and interdependent -- and, therefore, much more complex. From the interaction of independent entities, the notion of a system has emerged.

Understanding the structure of a system's components does not bring insights into how the system will behave as a whole. Indeed, the very concept of emergence fundamentally challenges our knowledge of complex systems, as self-organization allows for novel properties -- features not previously observed in the system or its components -- to unfold. The whole is literally more than the sum of its parts.

This shift away from analyzing the structure of "things" to analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics and sociology. The need to bring about such a shift in economics, too, can be heard in the words of Andy Haldane, chief economist at the Bank of England (Haldane 2011):
Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behavior of the system was just an aggregated version of the behavior of the individual. Almost by definition, complex systems do not behave like this. [...] Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behavior of any one node.

In a nutshell, the key to the success of complexity science lies in ignoring the complexity of the components while quantifying the structure of interactions. An ideal abstract representation of a complex system is given by a graph -- a complex network. This field has been emerging in a modern form since about the turn of the millennium (Watts and Strogatz 1998; Barabasi and Albert 1999; Albert and Barabasi 2002; Newman 2003).

Underpinning economics with insights from complex systems requires a major culture change in how economics is conducted. Specialized knowledge needs to be augmented with a diversity of expertise. Or, in the words of Jean-Claude Trichet, former president of the European Central Bank (Trichet 2010):

I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analyzing complex dynamic systems in a rigorous way.

What's more, scientists themselves have acknowledged this call for action (see, e.g., Schweitzer et al. 2009; Farmer et al. 2012).

In what follows, I will present two case studies that provide an initial glimpse of the potential of applying such a data-driven and network-inspired type of research to economic systems. By uncovering patterns of organization otherwise hidden in the data, these studies caught the attention not only of scholars and the general public, but also of policymakers.

The network of global corporate control

A specific constraint related to the analysis of economic and financial systems lies in an unfortunate relative lack of data. While other fields are flooded with data, in the realm of economics, a lot of potentially valuable information is deemed proprietary and not disclosed for strategic reasons. A viable detour is utilizing a good proxy that is exhaustive and widely available.

Ownership data, representing the percentages of equity a shareholder has in certain companies, is such a dataset. The structure of the ownership network is thought to be a good proxy for that of the financial network (Vitali, Glattfelder and Battiston 2011). However, this is not the main reason for analyzing such a dataset. Ownership networks represent an interface between the fields of economics and complex networks because information on ownership relations crucially unlocks knowledge relating to the global power of corporations. As a matter of fact, ownership gives a certain degree of control to the shareholder. In other words, the signature of corporate control is encoded in these networks (Glattfelder 2013). These and similar issues are also investigated in the field of corporate governance.

Bureau van Dijk's commercial Orbis database comprises about 37 million economic actors (e.g., physical persons, governments, foundations and firms) located in 194 countries as well as roughly 13 million directed and weighted ownership links for the year 2007. In a first step, a cross-country analysis of this ownership snapshot was performed (Glattfelder and Battiston 2009). A key finding was that the more locally dispersed control appears to be, the more it is concentrated globally in the hands of a few powerful shareholders. This is in contrast to the economic idea of "widely held" firms in the United States (Berle and Means 1932). In fact, these results show that the true picture can only be unveiled by considering the whole network of interdependence. By simply focusing on the first level of ownership, one is misled by a mirage.

In a next step, the Orbis data was used to construct the global network of ownership. By focusing on the 43,060 transnational corporations (TNCs) found in the data, a new network was constructed that comprised all the direct and indirect shareholders and subsidiaries of the TNCs. Then, this network of TNCs, containing 600,508 nodes and 1,006,987 links, was further analyzed (Vitali, Glattfelder and Battiston 2011). Figure 1 shows a small sample of the network.

Analyzing the topology of the TNC network reveals the first signs of an organizational principle at work. One can see that the network is actually made up of many interconnected sub-networks that are not connected among themselves. The cumulative distribution function of the size of these connected components follows a power law, as there are 23,824 such components varying in size from many single isolated nodes to a cluster of 230 connected nodes. However, the largest connected component (LCC) represents an outlier in the power-law distribution, as it contains 464,006 nodes and 889,601 links.
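
As a rough illustration of this kind of topological analysis, here is a short Python sketch that builds a toy directed ownership graph with the networkx library and extracts the sizes of its (weakly) connected components and the share of nodes sitting in the largest one. The edge list and numbers are invented for illustration; the Orbis data itself is proprietary.

    # Sketch: component analysis of a toy ownership network (hypothetical data).
    import networkx as nx

    # (shareholder, company, fraction of equity held)
    edges = [
        ("FundA", "BankX", 0.20),
        ("BankX", "FirmY", 0.55),
        ("FundA", "FirmY", 0.10),
        ("FirmZ", "FirmW", 0.80),   # a separate, isolated component
    ]

    G = nx.DiGraph()
    G.add_weighted_edges_from(edges)

    # Components are taken on the undirected skeleton ("weakly connected").
    components = sorted(nx.weakly_connected_components(G), key=len, reverse=True)
    sizes = [len(c) for c in components]
    lcc_share = sizes[0] / G.number_of_nodes()

    print("component sizes:", sizes)
    print("share of nodes in the largest connected component: {:.0%}".format(lcc_share))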

This super-cluster contains only 36 percent of all TNCs. In effect, most TNCs "prefer" to be part of isolated components that comprise a few hundred nodes at most. But what can be said about the TNCs in the LCC? By adding a proxy for the value or size of firms, the network analysis can be extended. In the study, the operating revenue was used for the value of firms. Now it is possible to see where the valuable TNCs are located in the network. Strikingly, the 36 percent of TNCs in the LCC account for 94 percent of the total TNC operating revenue. This finding justifies focusing further analysis solely on the LCC.

In general, assigning a value v_j to firm j gives additional meaning to the ownership network. As mentioned, a good proxy reflecting the economic value of a company is the operating revenue. Assigning such a non-topological variable to the nodes uncovers a deeper level of information embedded in the network. If shareholder i holds a fraction W_{ij} of the shares of firm j, W_{ij} v_j represents the value that i holds in j. Accordingly, the portfolio value of firm i is given by
p_i = sum_j W_{ij} v_j, (1.1)
However, in ownership networks, there are also chains of indirect ownership links. For instance, firm i can gain value from firm k via firm j, if i holds shares in j, which, in turn, holds shares in k. Symbolically, this can be denoted as i -> j -> k.
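
To make equation (1.1) and the indirect chains concrete, here is a minimal numerical sketch. The matrix W, the values v and the geometric-series treatment of indirect paths are simplifying assumptions chosen for illustration; the published methodology handles indirect control in a more refined way.

    # Sketch: direct vs. indirect portfolio value in a tiny ownership network.
    # W[i, j] = fraction of firm j's shares held by agent i; v[j] = value of firm j.
    # All numbers are hypothetical.
    import numpy as np

    W = np.array([
        [0.0, 0.6, 0.0],   # agent 0 holds 60% of firm 1
        [0.0, 0.0, 0.5],   # firm 1 holds 50% of firm 2
        [0.0, 0.0, 0.0],
    ])
    v = np.array([10.0, 20.0, 40.0])   # e.g., operating revenue as a value proxy

    # Equation (1.1): direct portfolio value p_i = sum_j W_ij * v_j
    p_direct = W @ v

    # Indirect chains i -> j -> k contribute via W^2, W^3, ...; the series
    # W + W^2 + ... sums to (I - W)^{-1} W whenever the spectral radius of W is below one.
    W_integrated = np.linalg.inv(np.eye(3) - W) @ W
    p_integrated = W_integrated @ v

    print("direct portfolio values:   ", p_direct)      # agent 0: 0.6 * 20 = 12
    print("direct + indirect values:  ", p_integrated)  # agent 0 gains 0.6 * 0.5 * 40 = 12 more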

Using these building blocks, and the fact that ownership is related to control, a methodology is introduced that estimates the degree of influence that each agent wields as a result of the network of ownership relations. In other words, a network centrality measure is provided that not only accounts for the structure of the shareholding relations, but -- crucially -- also incorporates the distribution of value. This allows for the top shareholders to be identified. As it turns out, 730 top shareholders have the potential to control 80 percent of the total operating revenue of all TNCs. In effect, this measure of influence is one order of magnitude more concentrated than the distribution of operating revenue. These top shareholders consist of financial institutions located in the United States and the United Kingdom (note that holding many ownership links does not necessarily result in a high value of influence).

Combining these two dimensions of analysis -- that is, the topology and the shareholder ranking -- finally uncovers yet another pattern of organization. A striking feature of the LCC is that it has a tiny but distinct core of 1,318 nodes that are highly interconnected (12,191 links). Analyzing the identity of the firms present in this core reveals that many of them are also top shareholders. Indeed, the 147 most influential shareholders in the core can potentially control 38 percent of the total operating revenue of all TNCs. In other words, a "superentity" with disproportional power is identified in the already powerful core, akin to a fractal structure.

This emerging power structure in the global ownership network has possible negative implications. For instance, as will be discussed in the next section, global systemic risk is sensitive to the connectivity of the network (Battiston et al. 2007; Lorenz and Battiston 2008; Wagner 2009; Stiglitz 2010; Battiston et al. 2012a). Moreover, global market competition is threatened by potential collusion (O'Brien and Salop 2001; Gilo, Moshe and Spiegel 2006).

Subjecting a comprehensive global economic dataset to a detailed network analysis has the power to unveil organizational patterns that have previously gone undetected. Although the exact numbers in the study should be taken with a grain of salt, they still give a good first approximation. For instance, the very different methods that can be used to estimate control from ownership all provide very similar aggregated network statistics.

Finally, although it cannot be proved that the top influencers actually exert their power or are able to leverage their privileged position, it is also impossible to rule out such activities -- especially since these channels for relaying power can be utilized in a covert manner. In any case, the degree of influence assigned to the shareholders can be understood as the probability of achieving one's own interest against the opposition of the other actors -- a notion reminiscent of Max Weber's idea of potential power (Weber 1978).

An ongoing research effort aims to extend this analysis to include additional annual snapshots of the global ownership network up to 2012. The focus now lies on the dynamics and evolution of the network. In particular, the stability of the core over time will be analyzed. Preliminary results on a small subset of the data suggest that the structure of the core is indeed stable. If verified, this would imply that the emergent power structure is resilient to forces reshaping the network architecture, such as the global financial crisis. The structure could also potentially be resistant to market reforms and regulatory efforts.

DebtRank

In an interconnected system, the notion of risk can assume many guises. The simplest and most obvious manifestation is that of individual risk. The colloquialism "too big to fail" captures the promise that further disaster can be averted by identifying and assisting the major players. This approach, however, does not work in a network. In systems where the agents are connected and therefore codependent, the relevant measure is systemic risk. Only by understanding the architecture of the network's connectivity can the propagation of financial distress through the system be understood. In essence, systemic risk is akin to the process of an epidemic spreading through a population.

A naive intuition would suggest that by increasing the interconnectivity of the system, the threat of systemic risk is reduced. In other words, the overall system should be more resilient when agents diversify their individual risks by increasing the shared links with other agents. Unfortunately, this can be shown to be false (Battiston et al. 2012a). Granted, in systems with feedback loops, such as financial systems, initial individual risk diversification can indeed start off by reducing systemic risk. However, there is a threshold related to the level of connectivity, and once it has been reached, any additional diversification effort will only result in increased systemic risk. Above this threshold, feedback loops and amplification effects can lead to a knife-edge property, whereby stability is suddenly compromised.

Now a paradox emerges: Although individual financial agents become more resistant to shocks coming from their own business, the overall probability of failure in the system increases. In the worst-case scenario, the efforts of individual agents to manage their own risk increase the chances that other agents in the system will experience distress, thereby creating more systemic risk than the risk they reduced via risk-sharing. Against this backdrop, the highly interconnected core of the global ownership network looms ominously.

To summarize, in the presence of a network, it is not enough to simply identify the big players that have the potential to damage the system should they experience financial distress. Instead, it is crucial to analyze the network of codependency. The phrase "too connected to fail" captures this focus. However, for this approach to be implemented, a full-blown network analysis is required. Insights can only be gained by simulating the dynamics of such a system on its underlying network structure. For instance, one cannot calculate analytically the threshold of connectivity past which diversification has a destabilizing effect.

Still, there is a final step that can be taken in analyzing systemic risk in networks. Next to "too big to fail" (which focuses on the nodes) and "too connected to fail" (which incorporates the links), a third layer can be added by utilizing a more sophisticated network measure called "centrality." In a nutshell, a node's centrality simply depends on its neighbors' centrality. For example, PageRank, the algorithm that Google uses to rank websites in its search-engine results, is a centrality measure. A webpage is more important if other important webpages link to it. Recall also that the methodology for computing the degree of influence that was discussed in the previous section is another example of centrality.
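
To illustrate the recursive idea that a node's centrality depends on its neighbors' centrality, the following sketch computes PageRank-style scores on a toy link graph by plain power iteration. The graph and the damping factor of 0.85 are illustrative choices, not taken from the studies discussed here.

    # Sketch: PageRank as an example of a recursive centrality measure.
    import numpy as np

    # Hypothetical "who links to whom": node 0 links to 1 and 2, node 1 to 2, node 2 to 0.
    out_links = {0: [1, 2], 1: [2], 2: [0]}
    n = 3

    # Column-stochastic transition matrix: M[i, j] = probability of moving from j to i.
    M = np.zeros((n, n))
    for src, targets in out_links.items():
        for dst in targets:
            M[dst, src] = 1.0 / len(targets)

    damping = 0.85
    rank = np.full(n, 1.0 / n)
    for _ in range(100):                      # power iteration until (near) convergence
        rank = (1 - damping) / n + damping * (M @ rank)

    print("centrality scores:", np.round(rank, 3))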

A study focusing on this "too central to fail" notion of systemic risk has been conducted (Battiston et al. 2012b). The work employed previously confidential data on the 2008 crisis gathered by the US Federal Reserve to assess systemic risk as part of the Fed's emergency loans program. Inspired by the methodology behind the computation of shareholder influence and PageRank, a novel centrality measure for tracking systemic risk, called DebtRank, is introduced.

In the study, debt data from the Fed is augmented with the ownership data used in the analysis of the network of global corporate control. As mentioned, the ownership network is a valid proxy for the undisclosed financial network linking banks. The data also includes detailed information on daily balance sheets for 407 institutions that, together, received bailout funds worth $1.2 trillion from the Fed. The data covers 1,000 days from before, during and after the peak of the crisis, from August 2007 to June 2010. The study focuses on the 22 banks that collectively received three-quarters of that bailout money. It is interesting to observe that almost all of these banks were members of the "super-entity."

DebtRank computes the likelihood that a bank will default as well as how much this would damage the creditworthiness of the other banks in the network. In essence, the measure extends the notion of default contagion into that of distress propagation. Crucially, DebtRank proposes a quantitative method for monitoring institutions in a network and identifying the ones that are the most important for the stability of the system.
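
The snippet below is only a schematic sketch of such a distress-propagation measure, loosely inspired by the DebtRank idea rather than a faithful reimplementation of Battiston et al. (2012b): every node carries a distress level between zero and one, each node passes its distress on to its creditors at most once, weighted by the exposure relative to the creditor's equity, and the final impact is the value-weighted distress caused beyond the initially shocked node. The exposure matrix and economic values are invented.

    # Schematic DebtRank-style distress propagation (illustrative only).
    # exposure[i, j]: loss node i suffers, as a fraction of its equity, if node j defaults.
    import numpy as np

    exposure = np.array([
        [0.0, 0.4, 0.1],
        [0.2, 0.0, 0.3],
        [0.5, 0.0, 0.0],
    ])
    value = np.array([0.5, 0.3, 0.2])       # relative economic value of each node

    def impact_of_shock(shocked, exposure, value):
        n = len(value)
        h = np.zeros(n)                     # distress level of each node, in [0, 1]
        h[shocked] = 1.0
        queue = [shocked]                   # nodes that still have to pass on their distress
        done = set()
        while queue:
            j = queue.pop(0)
            done.add(j)
            for i in range(n):              # creditors of j absorb a share of j's distress
                increment = exposure[i, j] * h[j]
                if increment > 0.0 and h[i] < 1.0:
                    h[i] = min(1.0, h[i] + increment)
                    if i not in done and i not in queue:
                        queue.append(i)
        # value-weighted distress caused in the rest of the system
        return float(np.dot(h, value) - value[shocked])

    print("impact of shocking node 0:", round(impact_of_shock(0, exposure, value), 3))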

Figure 2 shows an "X-ray image" of the global financial crisis unfolding. It is striking to observe how many of the major players are affected and how some individual institutions threaten the majority of the economic value in the network (a DebtRank value larger than 0.5). Indeed, if a bank with a DebtRank value close to one defaults, it could potentially obliterate the economic value of the entire system. And, finally, the issue of "too central to fail" becomes dauntingly visible: Even institutions with relatively small asset size can become fragile and threaten a large part of the economy. The condition for this to happen is given by the position in the network as measured by the centrality.

In a forthcoming publication (Battiston et al. 2015), the notion of DebtRank is re-expressed making use of the more common notion of leverage, defined as the ratio between an institution's assets and equity. From this starting point, the authors develop a stress-test framework that allows the computation of a whole set of systemic risk measures. Again, since detailed data on the bilateral exposures between financial institutions is not publicly available, the true architecture of the financial network cannot be observed. In order to overcome this problem, the framework utilizes Monte Carlo samples of networks with realistic topologies (i.e., network realizations that match the aggregate level of interbank exposure for each financial institution).

As an illustrative exercise, the authors run the framework on a set of European banks, using empirical data on aggregated interbank lending and borrowing volumes obtained from Bankscope, covering 183 EU banks. The interbank network is reconstructed for the years 2008 to 2013 using the so-called fitness model. Importantly, the attention is placed not only on the first-round effects of an initial shock, but also on the subsequent additional rounds of reverberations within the interbank network. A crucial result is given by the following relation:
L(2) = l^b S, (1.2)
where L(2) represents the total relative equity loss in the second round of distress propagation induced by the initial shock S, and where l^b > 0 is the weighted average of the interbank leverage, derived from the interbank assets and equity. In detail, S is computed from the unit shock on the value of external assets and the external leverage, that is, the leverage related to the assets that do not originate from within the interbank system.

Equation (1.2) implies the highly undesirable conclusion that the second round of distress propagation is at least as detrimental as the initial shock. This result highlights the important fact that waves of financial distress ripple through the network multiple times, in a way that intensifies the problem for the individual nodes. This mechanism only truly becomes visible in a network analysis of the system. The result is also empirically compelling, as levels of interbank leverage are often around a value of two. In this light, the distress in the second round can be twice as big as the initial distress on the external assets. To conclude, neglecting second-round effects could lead to a severe underestimation of systemic risk.
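
To put illustrative numbers on this (the figures are hypothetical, chosen only to match the typical interbank leverage mentioned above): with l^b = 2 and an initial shock wiping out S = 1 percent of equity, equation (1.2) gives L(2) = 2 x 0.01 = 0.02. The second round alone thus erases another 2 percent of equity, and a stress test that stops after the first round would report only a third of the combined 3 percent loss.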

Outlook for policy-making

What is the added value of trying to understand the economy as an interconnected complex system? The most important result to mention in this context is the power of such analysis to uncover hidden features that would otherwise go undetected. Stated simply, the intractable complexity of financial systems can be decoded and understood by unraveling the underlying network.

A prime example of a network analysis uncovering unsuspected latent features is the detection of the tiny, but highly interconnected core of powerful actors in the global ownership network. It is a novel finding that the most influential companies do not conduct their business in isolation, but rather are entangled in an extremely intricate web of control. Notice, however, that the very existence of such a small, powerful and self-controlled group of financial institutions was unsuspected in the economics literature. Indeed, its existence is in stark contrast with many theories on corporate governance (see, e.g., Dore 2002).

However, understanding the structure of interaction in a complex system is only the first step. Once the underlying network architecture is made visible, the resulting dynamics of such systems can be analyzed. Recall that distress spreads through the network like an epidemic, infecting one node after another. In other words, the true understanding of the notion of systemic risk in a financial setting crucially relies on the knowledge of this propagation mechanism, which again is determined by the network topology. As discussed above, in a real-world setting in which feedback loops can act as amplifiers, the second-round effect of an initial shock is also at least as big as the initial impact. It should be noted that the notorious "bank stress tests" also aim at assessing such risks. More specifically, it is analyzed whether, under unfavorable economic scenarios, banks have enough capital to withstand the impact of adverse developments. Unfortunately, while commendable, these efforts only emphasize first-round effects and therefore potentially underestimate the true dangers to a significant degree. A recent example is the Comprehensive Assessment conducted by the European Central Bank in 2014, which included the Asset Quality Review.

A first obvious application of the knowledge derived from a complex-systems approach to finance and economics is related to monitoring the health of the system. For instance, DebtRank allows systemic risk to be measured along two dimensions: the potential impact of an institution on the whole system as well as the vulnerability of an institution exposed to the distress of others. This identifies the most dangerous culprits, namely, institutions with both high vulnerability and impact. In Figure 3, the whole extent of the financial crisis becomes apparent, as high vulnerability was indeed compounded with high impact in 2008. In 2013, high vulnerability was offset by relatively low impact.

In addition to analyzing the health of the financial system at the level of individual actors, an index could be constructed that incorporates and aggregates the many facets of systemic risk. In this case, sectors and countries could also be scrutinized. A final goal would be the implementation of forecasting techniques. What probable trajectories leading into crisis emerge from the current state of the system? As Haldane (2011) noted in contemplating the idea of forecasting economic turbulences:

It would allow regulators to issue the equivalent of weather-warnings -- storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued -- keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable "what-if?" simulations to be run -- if UK bank Northern Rock is the first domino, what will be the next?

In essence, a data- and complex systems-driven approach to finance and economics has the power to comprehensively assess the true state of the system. This offers crucial information to policymakers. By shedding light on previously invisible vulnerabilities inherent in our interconnected economic world, the blindfolds of ignorance can be removed, paving the way to policies that effectively mitigate systemic risk and avert future global crises.


References and Figures










 —  —  — 

This was a chapter contribution to “To the Man with a Hammer: Augmenting the Policymaker’s Toolbox for a Complex World”, Bertelsmann Stiftung, 2016:
This article collection helps point the way forward. Gathering a distinguished panel of complexity experts and policy innovators, it provides concrete examples of promising insights and tools, drawing from complexity science, the digital revolution and interdisciplinary approaches.

Table of contents:



 —  —  — 

See also "Ökonomie neu denken" ("Rethinking Economics"), February 16, 2016, Frankfurt am Main, and the Podiumsdiskussion (panel discussion).