We're Doing It Wrong
Ed. Note: next week we will announce the Epsilon Theory Professional Service, where financial advisors and investment professionals can tap directly into our Narrative Machine market research and analysis. This week we’ll be highlighting specific investment applications we’ll be offering as part of that service, as well as the Big Picture value proposition. – Ben
We’re all explorers seeking to pierce the veil, hoping against hope for a vision of the music of the spheres beyond this messy world. Looking for an Answer in the clockwork machine that we all believe incorporates and underpins markets.
I think we’re doing it wrong.
I think investment professionals, quant and non-quant alike, are misusing the massive computing power that each and every one of us has at our fingertips. Whether it’s the powerful computer that we call a smartphone, whether it’s the crazy powerful multi-threaded computer that we call a laptop, whether it’s the insanely powerful computing utility we call AWS or Azure or the like … we’re using machine computing processes as an extension of our human computing processes.
This is a classic anthropomorphic fallacy.
Meaning that we can’t imagine what it would mean to use computers in some other, non-human way. Meaning that we never even consider that ours is a distinctly human way of perceiving the world, much less ask what a non-human approach might be.
Here, I’ll give you an example.
In every how-do-we-use-computers-in-investing conversation I’ve ever had with anyone in tech … in every how-do-we-use-computers-in-investing conversation I’ve ever had with anyone in finance … in every how-do-we-use-computers-in-investing conversation I’ve ever had with MYSELF … the conversation, either implicitly or explicitly, is ALWAYS about using computers to find some hidden formula that will make us lots of money.
Always. Without exception. Ever.
This is a slide that ET contributor Neville Crawley made a while back, and it slays in meetings. It resonates. It sings.
Oh yeah, I see why we want this artificial intelligence system (I mean, I don’t know why you’re calling it Big Compute, but whatever).
It’s the next level. It’s the Giant Brain, replacing the Big Brain of all those computers that DE Shaw and Two Sigma and RenTech are using to figure out markets and mint money, which replaced the Little Brain of us humans scurrying around in the pits. AI is going to pierce through all the noise and find us the signal. It’s going to identify the pattern. It’s going to tell us the Answer.
Do you feel it? I feel it. It’s why I became a professional investor in the first place. To figure it out. To find those patterns and signals that would make me rich. To find the “tell” of markets.
But that’s not how AI works. That’s not how any of this works.
AI is not a giant brain, and there is no Answer to be found.
AI isn’t even a Difference Engine, to borrow the name of Charles Babbage’s mechanical calculating machine (his later, programmable Analytical Engine is the true ancestor of what we now call a computer). It’s more like a Connection Engine, able to “see” the similarities in a million-fold matrix all at once. It’s a non-human intelligence, more like an insect’s compound eye + nervous system than anything human-ish. And yes, there’s an oldie but goodie Epsilon Theory note for that.
As for the Answer …
The absence of an Answer – by which I mean the non-existence of a general closed-form solution or a predictive algorithm in any physical system of three or more interacting entities, and certainly in any social system – is at the core of two canonical Epsilon Theory notes: The Three-Body Problem and Clear Eyes, Full Hearts, Can’t Lose. Honestly, it’s the heart of the entire Things Fall Apart series of ET notes.
This message – that there is no predictive algorithm for social systems – bears repeating over and over, because the human brain is hard-wired to seek that algorithm. We literally cannot help ourselves. I believe it’s the root of every totalitarian impulse, large and small, that the human animal has ever experienced. That totalitarian impulse is most obvious – and most deadly – in our social system of politics, but it is no less present in our social system of markets.
We think of markets as a clockwork machine, as an intricate collection of gears upon gears. We believe that if only we examine the clockwork closely enough, we can identify some hidden gear or unbeknownst gear movement that will let us predict the clockwork’s movement and make a lot of money.
Our MODEL of markets is The Machine, and every Machine has a deterministic set of algorithms that create and drive it. Every Machine has an Answer.
This model – the market as machine – is an anthropomorphism.
There are lots of historical and anthropomorphic reasons why we think of social systems as machines. But they are all historical and anthropomorphic reasons. There’s nothing “natural” about it. And yes, there’s an Epsilon Theory note on that, too.
I’m sorry, Ray Dalio, but as a philosopher you’re a fantastic hedge fund manager.
To be clear, market-as-machine is a perfectly useful anthropomorphic model for most investment purposes, just as Ptolemy’s Earth-centric universe was a perfectly useful anthropomorphic model for most navigational purposes. Seriously, if your goal is to sail your ship from Tyre to Ostia, then you can’t do better than celestial navigation per Ptolemy. If you want to go to the moon, on the other hand …
Anthropomorphic models break when a revolutionary invention allows us to SEE the world in a non-human way.
For the Ptolemaic earth-as-center-of-the-universe model, that revolutionary invention was the telescope and the ability to see sunspots and Jupiter’s moons and all sorts of astronomical objects and phenomena that were, literally, previously invisible to the human eye.
AI is the revolutionary invention that breaks the market-as-machine model. It allows us to SEE narrative and sentiment and all sorts of social objects and phenomena that were, literally, previously invisible to the human eye.
To be sure, this new invention that lets us see in non-human ways isn’t a sufficient condition to break an anthropomorphic model. These models become so embedded in our social institutions and our minds that, as with Ptolemaic science and the invention of the telescope, it can take a hundred years and a lot of violence for a better model to be widely accepted.
And that’s the problem with AI for most investors, quant and non-quant alike.
If you use computers in your investment research process – and I know you do – I will bet you umpteen zillion dollars that you have those computers looking at structured historical data in an effort to find some repeating pattern. I will bet you do this rigorously and intentionally if you’re a quant. I will bet that you do this all the same, but non-rigorously and haphazardly if you’re not a quant.
Whether you realize it or not, you are using the market-as-machine model. You are looking for the Answer. Go on, you can admit it. You’re among friends here. I’m like Big Lou in the insurance ads … I’m one of you. It is embedded in our minds and in our businesses. Mine, too.
But here’s the thing.
If you use AI as just another input to that market-as-machine investment research process, you will get puzzling “results” that don’t help you very much. It will be just like using a telescope to get better measurements of the retrograde motion of Mars as it orbits around the Earth in your Ptolemaic model.
You will be disappointed by AI.
I want to suggest a different way to think about markets, a non-anthropomorphic model that works WITH the revolutionary invention of AI and Big Compute.
The market is not a clockwork machine.
The market is a bonfire.
We all know the physics of fire. The underlying rules of combustion are as clear and as deterministic as any pendulum or gear movement. Fire is not magic. Fire is not somehow separate from science or rigorous human examination. We know how to start fires. We know how to grow and diminish fires. We know how to put fires out. In a technical sense, Ray, you can classify fire as a machine.
But you’d never think that you could possess an algorithm that predicts the shape and form of a bonfire.
You’d never think that if only you stared at the fire long enough, and god knows humans have been staring at fires for tens of thousands of years, that somehow you’d divine some formula for predicting the shape of this or that lick of flame or the timing of this or that log collapsing in a burst of sparks.
No human can algorithmically PREDICT how a fire will burn. Neither can a computer. No matter how much computing power you throw at a bonfire, a general closed-form solution for a macro system like this simply does not exist.
But a really powerful computer can CALCULATE how a fire will burn. A really powerful computer can SIMULATE how a fire will burn. Not by looking for historical patterns in fire. Not by running econometric regressions. Not by figuring out the “secret formula” that “explains” a macro phenomenon like a bonfire. That’s the human way of seeing the world, and if you use your computing power to do more of that, you are wasting your time and your money.

No, a really powerful computer can perceive the world differently. It can “see” every tiny piece of wood and every tiny volume of oxygen and every tiny erg of energy. It “knows” the rules for how wood and oxygen and heat interact. Most importantly – and most differently from humans – this really powerful computer can “see” all of these tiny pieces and “know” all of these tiny interactions at the same time. It can take a snapshot of ALL of this at time T and calculate what ALL of this looks like at time T+1, and then do that calculation again to figure out what ALL of this looks like at time T+2.
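If you want a concrete picture of what “calculating the future” means, here is a minimal sketch using a toy cellular-automaton fire model. The grid, the spread probability, and the update rule are illustrative assumptions, not real combustion physics or anything from an actual simulation code; the point is the shape of the computation, where the complete state at time T plus local interaction rules produces the complete state at time T+1.

```python
import random

# Toy fire simulation: each cell of a grid is FUEL, BURNING, or ASH.
# The full state at time T plus local interaction rules gives the full
# state at time T+1. No historical pattern-matching anywhere.
FUEL, BURNING, ASH = 0, 1, 2
SPREAD_PROB = 0.6  # illustrative parameter, not a physical constant

def step(grid):
    """Calculate the state at T+1 from the complete state at T."""
    n = len(grid)
    nxt = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == BURNING:
                nxt[i][j] = ASH  # a burning cell burns out
                # heat spreads to adjacent fuel with some probability
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == FUEL:
                        if random.random() < SPREAD_PROB:
                            nxt[ni][nj] = BURNING
    return nxt

# Ignite the center of the grid and roll the whole state forward in time.
N = 20
grid = [[FUEL] * N for _ in range(N)]
grid[N // 2][N // 2] = BURNING
for t in range(10):
    grid = step(grid)
print(sum(row.count(ASH) for row in grid), "cells burned to ash after 10 steps")
```

Run it twice and you get two different bonfires. No formula predicted either one; the computer simply stepped the whole state forward, interaction by interaction.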
Want to guess who spends more money on Big Compute than everyone else in the world combined?
It’s the U.S. government, through the Dept. of Defense and the Dept. of Energy.
Know why they’ve spent BILLIONS of dollars on the world’s most advanced supercomputers?
To calculate fire.
Not just any old fire, of course, but nuclear fire. This is why the most advanced computers in the world today have been built – to simulate the explosion of nuclear weapons. To see the future by calculating the future, not by analyzing the past for predictive algorithms.
It’s a hard concept to wrap your head around, this distinction between calculating the future and predicting the future, but it’s the key to thinking about your investment research process in a non-anthropomorphic way. It’s the key to successfully and profitably incorporating the revolutionary invention of AI and Big Compute into your investment research process.
Now let’s be really clear … we’re a loooong way from performing the market equivalent of simulating H-Bomb explosions with the Narrative Machine. No, we’re more at the stage of taking a rudimentary telescope and aiming it at the sky. We have neither the ability to “see” market participants at a molecular level nor the ability to “know” the physical interaction rules of these participants at anywhere near the same precision or “resolution” that the DoD can see or know nuclear reactions.
But when we show you a Narrative map of Inflation like this, that’s the path we’re on.
We’re taking ALL of the thousands of financial media articles published over some period of time that mention “inflation”, and comparing every word and every phrase in every article to every other word and every other phrase in every other article. It’s a million-fold matrix that “sees” these publications and their inchoate arguments all at the same time and measures their connectedness and similarities all at the same time, then visualizes that connectedness in dimensions that make sense to a human eye and mind – color, distance, size, position, etc.
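To make that concrete, here is a minimal sketch of the kind of pipeline such a map implies: pairwise similarity over a corpus, then a low-dimensional projection for the human eye. The toy article texts, the TF-IDF features, cosine similarity, and the MDS projection are all illustrative choices of ours, not a description of the actual Narrative Machine methodology.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS

# Hypothetical stand-ins for financial media articles that mention "inflation".
articles = [
    "Inflation fears push bond yields higher as the Fed signals more rate hikes",
    "Hot wage growth stokes inflation worries across equity and credit markets",
    "Tech stocks rally as investors shrug off the latest inflation data",
    "Central bank officials say inflation expectations remain well anchored",
]

# Compare every article to every other article, all at the same time:
# a dense pairwise-similarity matrix rather than a single time series.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles)
similarity = cosine_similarity(tfidf)

# Turn similarity into dissimilarity and project into 2-D so a human eye
# can see clusters and distances (color and size would encode other
# attributes in a full visualization).
dissimilarity = np.clip(1.0 - similarity, 0.0, None)
np.fill_diagonal(dissimilarity, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

for text, (x, y) in zip(articles, coords):
    print(f"({x:+.2f}, {y:+.2f})  {text[:45]}...")
```

Scale that up from four toy headlines to thousands of real articles, and the projected coordinates become a map of clusters, distances, and outliers rather than a list of points.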
This is narrative-space, and it’s something that investors have always felt or believed existed, but we’ve never been able to SEE. Until now.
We can’t give you a secret formula for predicting markets from looking at narrative-space.
But we can tell you what IS in narrative-space.
What good is that?
- We think we know some of the “rules” for calculating what’s NEXT in market participant behaviors from what IS in narrative-space. This is the Common Knowledge Game, and it’s a forward-looking, actor-based way of evaluating the path of markets (a minimal sketch follows this list).
- More importantly, we think that YOU already know many of the “rules” for calculating what’s next in whatever corner of the market is important to you. We think that experienced discretionary investors, traders, allocators and advisors have an enormous amount of internalized knowledge about the relationship between narrative-space and market participants in their arena of expertise. We think that a visualization of narrative-space can weaponize your internalized knowledge.
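For what it’s worth, here is a purely illustrative sketch of the Common Knowledge Game dynamic referenced above. The actors, thresholds, and signal values are our own assumptions for the example, not the rules used in the Narrative Machine research; the point is only the mechanism, where most actors move not on their private views but when a public statement makes a view common knowledge.

```python
from dataclasses import dataclass

# Toy Common Knowledge Game: most actors do not act on their private
# views alone; they act when a public signal (a "missionary" statement)
# makes a view common knowledge, because each actor knows that every
# other actor heard the same statement. All names and thresholds here
# are hypothetical.

@dataclass
class Actor:
    private_view: float        # -1.0 = very bearish ... +1.0 = very bullish
    position: str = "hold"

    def update(self, public_signal):
        if public_signal is not None and public_signal < 0:
            # Common knowledge: everyone knows that everyone heard it.
            self.position = "sell"
        elif self.private_view < -0.9:
            # Only the most extreme act on private information alone.
            self.position = "sell"

actors = [Actor(private_view=-0.5) for _ in range(100)]  # a mildly bearish crowd

for a in actors:
    a.update(public_signal=None)            # no missionary statement yet
print("sellers before public signal:", sum(a.position == "sell" for a in actors))

for a in actors:
    a.update(public_signal=-1.0)            # missionary: "inflation is back"
print("sellers after public signal: ", sum(a.position == "sell" for a in actors))
```

Nobody’s private view changed between the two prints. What changed is that everyone now knows that everyone else heard the same public statement, and behavior moves on that.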
Tapping directly into the Narrative Machine is not for everyone.
If you’re looking for a new variable for your regression analysis, you don’t want this. If you’re looking for a new data feed that you can analyze and arb, you don’t want this. If you’re running a purely systematic or passive investment strategy, you don’t want this.
But if any aspect of your investing or your portfolio allocation is still human … if any aspect of your investing or your portfolio allocation is still discretionary … we think you’ll find the Narrative Machine research project worth a look.
Not because we can give you an Answer.
But because we can advance your Process.
Come see what we're doing on Epsilon Theory. We're looking at markets and politics through the lenses of game theory, history and narrative.
Follow me @EpsilonTheory or connect with me on LinkedIn.