Tarun Chitra, cofounder and CEO of Gauntlet, describes how his company enables crypto teams to run through simulations to see how their design choices will affect the project once it is trading. Using the cofounders’ background in quantitative finance and high-frequency trading, Gauntlet uses tools from behavioral economics as well as game theory, plus real-world information such as exchange data to model outcomes after 100,000 blocks. We discuss how it makes its assumptions, how proof of stake differs from proof of work, and how the initial token distribution can affect the eventual concentration of tokens. He also reveals what design choice tends to have the greatest impact, as well as what main factor crypto teams aren’t thinking about that they should be. Plus, he talks about his involvement in the launch of Facebook’s Libra project, and gives us his thoughts on the design choices Facebook made regarding the consensus algorithm, programming language and the structure of the coin.
See the full show notes on Forbes! http://www.forbes.com/sites/
Take the Unchained Podcast survey!
Help make Unchained better! Take our survey and enter the giveaway for a free Bitcoin lightning node and a yearlong Casa Gold membership, including a multisig security app for iPhone and Android, a Trezor hardware wallet, a Casa Faraday bag, and 24/7 support! https://www.surveymonkey.com/
Thank you to our sponsors!
Tarun Chitra: https://twitter.com/
Gauntlet blog posts: https://medium.com/
Unchained episode with Olaf Carlson-Wee of Polychain from Consensus 2019: https://
Zero Knowledge podcast episode with Tarun Chitra: https://www.
Hi, everyone. Welcome to Unchained, your no-hype resource for all things crypto. I’m your host, Laura Shin. You may have heard, Unchained is doing a survey. We want to know, how do you think we can make the show better? How would you like to see Unchained expand? If you could just take a moment and go to https://www.surveymonkey.com/r/unchainedsurvey2019
Your answers will be a huge help to me and my team here at Unchained. Also those who answer the survey can enter to win one of five free Casa bitcoin lightning nodes plus a free year of Casa’s gold membership, including a multisig security app for iPhone and Android, a Trezor hardware wallet, a Casa Faraday bag, and 24/7 support. Those of you interested in learning more about Casa or about protecting your bitcoin investment generally, you should check out my interview with CEO Jeremy Welch.
Thank you to Casa for donating. Again, the URL is https://www.surveymonkey.com/r/unchainedsurvey2019. Go there now to give us your thoughts on the future direction of Unchained and enter the giveaway.
CipherTrace makes it easy for exchanges and crypto businesses to comply with cryptocurrency anti-money laundering laws. Avoid illegal sources of funds, and maintain healthy banking relationships. Ciphertrace is helping you grow the crypto economy by keeping it safe and secure.
Kraken is the best exchange in the world for buying and selling digital assets. It has the tightest security, deep liquidity and a great fee structure with no minimum or hidden fees. Whether you’re looking for a simple fiat onramp, or leveraged options trading, Kraken is the place for you.
My guest today is Tarun Chitra, cofounder and CEO of Gauntlet. Welcome, Tarun.
So tell us what Gauntlet does.
Yeah, so Gauntlet is a blockchain simulation platform. So we basically try to model all the different types of users who interact with both layer 1 chains and consensus protocols as well as smart contracts, and one of the reasons for really doing this is to have a strong statistical understanding in addition to, you know, security understanding of how these systems perform when different types of users are interacting with these systems.
A simple kind of example is, you know, in a lot of crypto protocols, people tend to compare byzantine, really kind of irrational actors versus honest actors who perfectly follow a protocol, but in reality, most users of these protocols are traders or are interacting with other systems, and modeling how their incentives work and how their rationality works gives you a better idea of how the system will behave under sort of more realistic conditions.
Yeah, I remember when you first told me about this, it reminded me of Monte Carlo simulations that sometimes retirement professionals do for you, or there’s software that does this to kind of model out with different inputs whether or not you’ll have enough money to last retirement and you know, et cetera. So it will kind of show the different scenarios under different market conditions, but you have a really interesting backstory to how you came into the space. Let’s just start with the first time you ever heard about bitcoin. How did that happen?
Yeah. Definitely. So I was working at this kind of odd research institute called D. E. Shaw Research. So it was a private research institute, and this billionaire, who was formerly a CS professor, was actually spending his fortune on building ASICs, which are application specific integrated circuits, which are the same chips, you know, you use in miners or in GPUs. We were building ASICs for doing physics research, and at that time, in 2011, there weren’t really many people building ASICs.
There were telecom companies who built them for routers. There were research groups who built them for new chip architectures, but most of those didn’t make it into production, and then there was, you know, Apple, Samsung, and so the interesting thing about these types of orders is that if you don’t have an order that’s of a large enough size, fabs, which are kind of these fabrication facilities, mainly in Asia right now, they won’t really talk to you.
So if you have less than 100 million dollars of chips that you want to get produced, they won’t really speak to you, and you have to go to these aggregators who take many small orders, and they do a lot of the technical work to make sure that your chips don’t interact with someone else’s chips, and the formal verification and the behavior of these chips is as it would be if you had the whole order yourself, and so we went to an aggregator.
You know, we had this roughly 25-million-dollar chip order, and they said, okay, great, we’ll be back in a few months with kind of the first samples, and then they, more or less, ghosted us for a while, and we were like, hey, what’s up? You know, you took 25 million dollars from us, and they were like, well, you know, actually, how about a 10 percent discount? And we were like, you can’t just give us a 10 percent discount without telling us why you just didn’t talk to us for three months after taking our money.
And that was in 2011, and so that was really one of the first bitcoin ASIC miners who was coming online, and that was when I was like, wow, I really should be trying to understand more about this because I thought it was…you know, not quite a joke, but I thought the paper was a little bit grandiose, you know, for a distributed system style paper, so I didn’t really, truly believe it until I started seeing people building hardware for it. That sort of set…
Wait, and just to be clear, so the fab basically got a big order from somebody who wanted to create a bitcoin ASIC, and so that’s why they postponed your order and pretended like they hadn’t taken your money? Is that what you’re saying?
Yes. I’m sure the bitcoin ASIC order paid them some increased cost over what we paid for the same amount of space, but yeah, we sort of got front-run by a bitcoin ASIC manufacturer.
And so then how did you go from, you know, at that point, just learning about bitcoin to eventually here…so that was 2011. Why don’t you fill us in on the last eight years and how you came to launch a company in the space?
Yeah. Definitely. So, basically, I’d mined quite a sizable amount of bitcoin, because, at that time, even then, you were still reasonably profitable with GPU mining, and my parents lived in a place that had relatively cheap electricity, and I just kind of had a computer running in their basement, and I had a lot of bitcoin. I saw the crash of 2013, and I really was scared, and I just sold all the bitcoin I had, and I was like, I’m never getting back into this.
It’s stressful. I have to worry about, you know, security and all this stuff, but I kept paying attention to the academic literature, and I’d really been taken by a couple key papers that had come out in 2013 and 2014 and ’15, and they were the GHOST paper…so GHOST stands for Greedy Heaviest-Observed Sub-Tree. This algorithm is actually one of the key components of the first version of Ethereum. It’s what kind of let Ethereum have a faster block production rate.
But it was the first academic paper that really studied the probabilistic aspects of how blocks travel through the network as well as how different types of adversaries would potentially try to interfere with the growth of the chain in a very formal way, a much more formal way than the original Satoshi paper, which, you know, over time, has been found to have a lot of kind of…at least from the mathematical side, has had a bunch of mistakes, and the second paper was a selfish mining paper by Emin Gün Sirer and Ittay Eyal.
And that was also another paper where people really used more rigorous probabilistic tools to try to find a bug, and in selfish mining, what happens is a miner kind of holds out a bunch of blocks that they produce, and then as long as they have an advantage over the rest of the chain, they keep growing on their private chain, and if the rest of the network comes close to them, they publish all of their blocks, and the idea with this attack is it kind of reduces the efficiency of the network and gives all the rewards to kind of the selfish…
A larger percentage of the rewards than they are due to the selfish miner. I tried getting people I worked with excited about this, but I think, you know, they were more traditional distributed systems and hardware people, and they just really were like, well, this is just a novelty and this is crazy people making ASICs in Taiwan. You know, can this thing be real? And then after I worked there for five years, I ended up working in high frequency trading, and that was kind of when I saw a lot more sophistication, a lot of financial sophistication, arrive in the space.
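The selfish-mining strategy described above can be sketched as a small Monte Carlo simulation of the state machine from the Eyal-Sirer paper. This is an illustrative sketch, not Gauntlet's actual model; the `gamma` parameter (the fraction of honest hash power that mines on the selfish pool's block during an equal-length race) and the constant seed are modeling assumptions.

```python
import random

def selfish_mining_share(alpha, gamma=0.5, rounds=500_000, seed=7):
    """Estimate the selfish pool's share of main-chain block rewards.

    alpha: selfish pool's fraction of total hash power
    gamma: fraction of honest hash power that mines on the pool's
           block during an equal-length race (a modeling assumption)
    """
    random.seed(seed)
    lead, tie = 0, False       # private-chain lead; tie = equal-length race
    pool, honest = 0, 0        # main-chain blocks credited to each side
    for _ in range(rounds):
        if random.random() < alpha:       # selfish pool finds a block
            if tie:
                pool += 2                 # extend and publish: win the race
                tie = False
            else:
                lead += 1                 # withhold the new block
        else:                             # honest network finds a block
            if tie:                       # race resolved by this block
                if random.random() < gamma:
                    pool += 1; honest += 1
                else:
                    honest += 2
                tie = False
            elif lead == 0:
                honest += 1               # honest chain simply grows
            elif lead == 1:
                lead, tie = 0, True       # pool publishes; equal-length race
            elif lead == 2:
                pool += 2                 # pool publishes its whole chain
                lead = 0
            else:
                pool += 1                 # pool reveals one block, keeps lead
                lead -= 1
    return pool / (pool + honest)
```

With a third of the hash power, the pool's revenue share comes out above the fair one-third, which is exactly the reward skew toward the selfish miner that Tarun describes.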
I think the Algorand paper really made me, for the first time, realize that there was sort of a melding of tech and finance in a way that hadn’t existed, because people were finding novel ways to make structured products that, you know, securitized the security of a network in terms of proof of stake, and kind of in trading, when you make a trading strategy, you basically design your strategy. You take the statistical features of it that you think are important that represent what other people are doing in the market or what you want your ideal strategy to be doing.
Then you kind of run these back tests, which are like the Monte Carlo tests you discussed with regards to your retirement account, where you basically can run the strategy on historical data and say, okay, I expect to make X amount of dollars. The standard deviation of the return is this. This is the worst case loss. This is the best case return, and you can analyze how this kind of algorithmic strategy that might just say 0.2 percent of the time, send an order here, 50 percent of the time, send an order here, something that’s not super human intuitive, and you run these simulations and you can get the economic intuitions for what it’s doing.
This is how it makes money. This is how it loses money. This is how statistically broad the distribution of outcomes is under different stress test scenarios, and that was when I started kind of seeing a lot of similarities to what people were doing in crypto in that cryptographers have a method of proof where they basically make an idealized simulation. So they say, hey, let’s pretend there is an oracle, so someone who has full control of the whole network, and the oracle picks an adversary every once in a while and lets the adversary read someone else’s stake.
So it means they hack their computer and read their private keys, and they can sign signatures as them, or they hack their computer and don’t forward blocks, and basically, the cryptography version of simulation, most of the proofs say, okay, let’s pretend we have this oracle. The oracle can call the adversary. It can also call honest users, and then it kind of interlaces actions between them and cryptographic proofs, like the ones in Algorand, prove that with a certain probability the adversary succeeds.
And that probability’s really low under certain choices of parameters, like how big your block is, your block production rate, assumption of how delayed the network is, and that was when I started seeing this kind of gap between modern finance where people really assume a lot more about the user. You assume that they’re rational and they have very different ways of measuring what rational means to them, versus kind of the cryptographic proof version where it’s everyone is trying to destroy everything or everyone is perfectly good all the time.
And in reality, everything is somewhere in the middle, and picking these kind of parameters in your protocol becomes increasingly important in proof of stake, because now instead of collateralizing kind of these assets with proof that you spent energy or proof that you expended a certain resource, like space, you do it with this proof of I locked up this digital asset in this system.
And there are a lot more edge cases there, and it feels a lot more like trading in a lot of ways, and that was kind of how I got into this, and I did some consulting for some layer 1 protocols, and that was when I was like, okay, there’s kind of a generic problem here where the tools of finance, especially in quantitative trading, are really useful for gaining intuition about the economics of these protocols and understanding kind of how different types of people would interact with them.
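The back-testing loop Tarun describes, running a strategy over many simulated price paths and summarizing the distribution of outcomes, can be sketched as follows. The geometric random-walk price model and the buy-and-hold strategy are illustrative assumptions, not anyone's production setup.

```python
import random
import statistics

def random_walk(n=250, seed=None):
    """A toy geometric random-walk price path (illustrative only)."""
    rng = random.Random(seed)
    path, price = [], 100.0
    for _ in range(n):
        price *= 1 + rng.gauss(0, 0.01)   # 1% daily volatility assumption
        path.append(price)
    return path

def backtest(strategy, price_paths):
    """Run the strategy on each path and summarize the P&L distribution:
    expected dollars, standard deviation, worst case, best case."""
    pnls = [strategy(path) for path in price_paths]
    return {
        "mean": statistics.mean(pnls),
        "stdev": statistics.pstdev(pnls),
        "worst": min(pnls),
        "best": max(pnls),
    }

# Example strategy: buy at the start of each path, sell at the end.
buy_and_hold = lambda path: path[-1] - path[0]
summary = backtest(buy_and_hold, [random_walk(seed=i) for i in range(100)])
```

The same skeleton carries over to agent simulations: swap the price paths for simulated chains and the strategy for an agent's policy, and the summary statistics become the "distribution of outcomes under stress tests" mentioned above.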
So you launched Gauntlet last year?
Yes. So, like, last summer. So late August, basically.
And you have funding from?
So our lead investor is a traditional investor, First Round, which was basically Uber’s first investor and Looker’s, but then we have a few crypto funds, Polychain, Dragonfly, Distributed Global, and then a bunch of angel investors who are really great, like Arthur from Tezos and Lily from [inaudible]
And so you talked, I mean, just about so many things that we’re going to unpack throughout this episode, but let’s just start with the most basic question. How do you come up with the models for the different types of users? Like, what if you miss a type, or…and it’s not even actually just the types, but how do you figure out which parameters to input for their behaviors?
Yeah. Definitely. So you have the same type of issue with trading strategies where, you know, you might make a trading strategy, and it works really well in simulation, and then you go to live trading, and it doesn’t work, and it’s because you missed some feature of live trading that you didn’t replicate in simulation correctly, and then you add that feature to simulation, and you kind of have this iterative process of, you know, you simulate something.
You go test it out in live. You simulate something, and you go test it out in live, and you make this feedback loop that makes it very data driven. So what you try to do is you try to distill things to simple models that are relatively easy to interpret. So the simplest models are like linear regressions. Like, I am a miner. I have three currencies I can mine on. These are the current market prices. This is how much hash power I have. This is my switching cost. This is my energy cost.
Here is kind of a function that says this is how many dollars I can expect to make under this allocation, and you basically have the miner try to optimize that. That is kind of like the simplest explanation of trying to map these economic primitives to something that is a simple model. Now, with cryptocurrencies, there’s actually this kind of hidden beauty to having a lot of public data. You know, I think you can train a lot of these models based on how well they reflect observed output data.
How well does your simulation’s total hash power reflect realistic hash power? How well do you predict transaction fees, and do these agents you’re modeling, do they actually replicate the observed live data? While it’s not as fast of a feedback loop as trading, there are still a lot of ways of basically taking simple models for how users behave. So describing how they value things, so describing the utility, like in classical microeconomics, and then describing how, based on their utility, they take an action of hash power goes here.
Stake goes here. I will go trade derivatives on BitMEX instead of trying to earn an income in the chain, and you know, you won’t get everything on the first try, but that’s true for a lot of statistical fields. In both AI and trading, a lot of times, you start with a simple model. You see how poorly you do, and you iteratively try to add features to the system such that the features are interpretable. So you understand why the machine is doing a certain thing, and also features such that, you know, there’s some risk assessment associated to them.
They’re not features that tell you they win 99 percent of the time, but you only win a dollar 99 percent of the time, and 1 percent of the time, you lose a billion dollars. So you try to bound what the variation in these features is. A lot of this is just basically trying to take techniques from AI and trading and trying to map them to this space, and when I say AI, I’m including DeepMind [inaudible] but also, you know, self-driving car simulation.
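The "simplest model" from this answer, a miner linearly optimizing expected dollars across chains, might look like the sketch below. The revenue formula (your share of network hash power times the dollar value of the block reward, minus energy cost) is a deliberately naive assumption, and switching costs are left out to keep it linear.

```python
def expected_profit(my_hash, price_usd, block_reward, network_hash,
                    energy_cost_per_hash):
    """Naive linear model: expected dollars per block on one chain."""
    share = my_hash / (network_hash + my_hash)   # chance of winning the block
    return share * block_reward * price_usd - my_hash * energy_cost_per_hash

def best_allocation(my_hash, coins, energy_cost_per_hash):
    """Pick the chain that maximizes expected profit.

    coins: dict of name -> (price_usd, block_reward, network_hash).
    """
    return max(coins, key=lambda c: expected_profit(
        my_hash, *coins[c], energy_cost_per_hash))
```

A miner with 10 units of hash power facing two chains with equal prices and rewards but different network hash rates would allocate to the less contested chain, and calibrating a model like this means checking whether the allocations it predicts match observed hash-power data.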
Interesting. So how long does it take to do a full analysis then? It sounds like since you talked about it being iterative, is it like a process over a month or several months, or can you do it kind of in an hour?
Yeah, so our goal is to make tools kind of just like you have in artificial intelligence, things like TensorFlow or PyTorch where you enable users, so blockchain developers or people who are interested in understanding the assets they own, to basically run different analyses themselves and then produce statistics. So the runtime of our simulation right now is basically we can simulate hundreds of thousands of blocks in a couple minutes with thousands of agents. Obviously, these agents are relatively simple. You know, we’re not running AlphaGo every time for every agent. They’re not doing some very complicated thing.
But you know, you can run the simulation and get statistics within a few minutes. You can run many different simulations with different parameters and kind of choose the best one. You can run them in parallel. Another way of thinking about how this is useful is you basically can pick the set of parameters you want, run the simulations, get some statistics, and then kind of analyze the statistics, things like how does the reward distribution change after 100 thousand blocks given this initial token distribution?
And that’s a problem that a lot of proof of stake protocols and new contracts face, because they want to figure out how to initially distribute their assets, but they don’t know whether by distributing it in a certain way, they will concentrate all the assets in a small number of holders very quickly or very slowly, and so right now, our analysis is mostly custom to some extent, although we do host different virtual machines, such as Ethereum. So you can give us a contract and you can basically input the contract into the code.
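The concentration question above, how the reward distribution changes after 100 thousand blocks given an initial token distribution, can be sketched with a toy staking model plus a Gini coefficient. Both the proportional-selection rule and the constant per-block reward are assumptions for illustration; real protocols vary in both.

```python
import random

def gini(stakes):
    """Gini coefficient: 0 = perfectly equal, near 1 = highly concentrated."""
    xs = sorted(stakes)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def simulate_stake(initial, blocks=100_000, reward=1.0, seed=1):
    """Each block, one validator wins the reward, chosen with probability
    proportional to its current stake (a common but not universal rule)."""
    random.seed(seed)
    stakes = list(initial)
    for _ in range(blocks):
        winner = random.choices(range(len(stakes)), weights=stakes)[0]
        stakes[winner] += reward
    return stakes
```

Running this from different initial distributions and comparing `gini(initial)` to `gini(final)` gives a crude version of the statistic described above: whether a given launch distribution concentrates tokens quickly or slowly.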
And we have a little kind of programming language that lets you define the different users, and then you can run the simulation, and then in a couple minutes, you’ll have some results. The analogue I like to use is game engines. In a lot of video games, the basic core architecture of the code is that there’s a core engine, which is the event loop. It manages the events that characters in the game produce, and events mean I shot a gun or I jumped or I did something.
Then there’s kind of the map and the ambient background, which is the set of physics that is common to all characters in the game, and then, lastly, there’s the characters and objects themselves, which are described in kind of this domain-specific language. So in Unity, you use C#. In Unreal, you use C++. In sort of mobile gaming languages, you use Swift or sometimes Python, and there you define a user based on its actions. So, like, you know, your character, if it gets shot, it will run away, or your character pulls out its sword when it sees the sunlight.
Those are the types of kind of little actions you encode, and then when you play the game, you basically are running this thing where the characters move, and when they interact, the physics is constrained by the map and the engine, and our simulation is sort of the same thing, except you replace physics with economics. So the map of the game is the set of economic primitives in a cryptocurrency or smart contract that is common to all users, and then the characters are kind of like the individual actors who describe how they behave under different circumstances.
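The game-engine analogy, an event loop for an engine, shared economics standing in for physics, and scripted characters, can be sketched like this. The congestion-driven fee rule and the `Spender` agent are invented here purely to illustrate the structure, not taken from Gauntlet's language.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    """The "map": economic state shared by every agent."""
    height: int = 0
    fee: float = 1.0
    mempool: int = 0

class Spender:
    """A "character": submits a transaction only while fees are affordable."""
    def __init__(self, max_fee):
        self.max_fee = max_fee

    def act(self, chain):
        return 1 if chain.fee <= self.max_fee else 0

def run(agents, blocks=200, capacity=5):
    """The "engine": per block, collect every agent's action, then apply
    the shared economics (congestion pushes the fee up, slack pulls it down)."""
    chain = Chain()
    for _ in range(blocks):
        chain.mempool += sum(agent.act(chain) for agent in agents)
        chain.mempool -= min(chain.mempool, capacity)   # fill the block
        chain.fee *= 1.1 if chain.mempool > 0 else 0.95
        chain.fee = max(chain.fee, 0.01)                # fee floor
        chain.height += 1
    return chain
```

Scripting a new user type means subclassing the character, exactly as one would in Unity or Unreal; the engine and the economics stay fixed.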
And then just to also draw out a different comparison, when I first learned about what you do, I kind of thought to myself, oh, this is similar to formal verification in the sense that you’re ensuring that the intentions in the design kind of play out in real life, but the difference is that formal verification is simply focused on a kind of like…how would I describe it? Sort of discrete actions with a smart contract, whereas what you’re looking at is how this will play out economically?
Yeah. Exactly. Basically, there are really two types of security. Well, I guess you could say three types of security. There’s cryptographic security, like are your signatures correct? Is your zero knowledge proof correct? Is your hash function behaving as expected? Like, is the randomness as random as it should be? Then there’s code correctness. Is your code allowing people to reenter certain functions, like in the DAO hack, or is your code allowing people to send all the value in the contract somewhere else?
And those are kind of like the very extreme edge cases, and formal verification is really good for that, and then the last thing is economic security. Is there enough reward or are there enough reward…is the reward distribution sufficient to ensure that all participants get sufficient return on investment or sufficient reward in however they measure it? And that’s really kind of where we come in, in trying to say, okay, well, your contract can be perfectly correct. It can be running on any blockchain you want.
It could be running on Polkadot. It could be running on Telegram. It could be running on Facebook, but you’ve chosen parameters, your interest rates or how you distribute rewards in such a way that users are not really incentivized to participate in your network. They don’t feel like they’re getting what they want, and that’s kind of where running simulation can give you a lot of the economic understanding that you can’t really see even just from looking at the source code.
And you kind of mentioned your business model a little bit. It sounds like you’ve started doing kind of custom work and then maybe are transitioning to eventually having software that maybe teams could license or something? Did I understand that correctly?
Yeah. So we plan on selling a hosted service where it’s like a hosted developer environment, and you can access the libraries there. So you can input your contract or give a URL to your contract and potentially directly to your client for some protocols, and basically, write agents. So script the agents the way you would script the character in a game engine. Like, this user behaves this way when the bitcoin price is greater than X, or use agents from a library that we’ll have.
So you can say, okay, I want agents who are risk averse and kind of rational, and we’ll have some examples of different types of users, and you can use them, and then you can run these simulations. So the business model is sort of a little more…it’s more of a traditional SaaS type of business. You know, I think there are a lot of benefits to kind of doing it this way from the perspective of having statistical transparency about what the models are, but also handling the infrastructure and handling the background execution and making sure that things work because there are a lot of moving parts when you try to build these kind of big systems and simulations.
And there are companies that have been quite successful at doing this. In gaming in particular, there’s a company called Improbable, which basically sells a simulation service where game designers can run millions of simulations and stress test the AI and automated characters in the system.
All right. So we’re going to discuss proof of work versus proof of stake and some of the other factors that Gauntlet will look at after the break, but first a quick word from, first of all, me, but then also our fabulous sponsors.
You may have heard, Unchained is doing a survey. We want to know, how do you think we can make the show better? How would you like to see Unchained expand? If you could just take a moment and go to https://www.surveymonkey.com/r/unchainedsurvey2019
Today’s episode is brought to you by Kraken. Kraken is the best exchange in the world for buying and selling digital assets. With all the recent exchange hacks and other troubles, you want to trade on an exchange you can trust. Kraken’s focus on security is utterly amazing, their liquidity is deep and their fee structure is great – with no minimum or hidden fees. They even reward you for trading so you can make more trades for less.
If you’re a beginner you will find an easy onramp from 5 fiat currencies, and if you’re an advanced trader you’ll love their 5x margin and futures trading.
To learn more, please go to kraken.com.
Did you know that if money laundering were an economy, its GDP would be the size of Canada’s? Large volumes of tainted crypto assets move through financial networks, often below the radar of banks. Cyber-criminals use unregulated crypto exchanges to avoid detection. No wonder governments around the world are rolling out tough new anti-money laundering laws for cryptocurrencies. Complying with those laws isn’t easy. Banks and exchanges need the best cryptocurrency intelligence available, to avoid penalties. Now you can use the same powerful AML and compliance monitoring tools used by regulators. CipherTrace is Securing the Crypto Economy. To learn more, visit Ciphertrace.com/unchained.
Back to my conversation with Tarun Chitra of Gauntlet. So one of the things just earlier when you were describing how these systems work or how you analyze them, I just got curious, is there any particular parameter that has the biggest effect in determining how a system functions, because you mentioned a bunch of different random things, like block time and in a second, we’re going to talk about proof of work versus proof of stake, so, like, consensus algorithm, but are you finding that there’s a single decision that tends to have some huge effect?
I think the thing that protocol developers right now are starting to see as a very important economic decision is how transaction fees are computed. So, in bitcoin, of course, with kind of the fully deflationary model, transaction fees are tending toward surge pricing. So as the block reward decreases, you start to see huge increases in transaction fees when the mempool gets clogged. So the mempool is kind of the set of transactions that a miner sees that they can include into blocks.
Also you’ve seen this with Ethereum where, in December 2017 during kind of the craze, CryptoKitties kind of spammed the network and no transactions were going through, and in fact, I mean, you’ve seen it a bunch of other times since then, but I guess that was kind of the most famous one where transaction fees went really high. So gas cost, which is the cost for running an operation, spiked a lot, and users kind of got very poor quality of service. So designing really good transaction fee models and modeling for, like, how pricing should adjust with demand is really going to become way more important as these systems increase in use.
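A demand-responsive fee rule of the kind described here can be sketched as a base fee that drifts toward target block usage. This mirrors the mechanism Ethereum later adopted in EIP-1559, with simplified parameters, and assumes blocks can hold at most twice the target.

```python
def next_base_fee(base_fee, gas_used, gas_target, max_change=0.125):
    """Move the base fee up when blocks are fuller than target and down
    when emptier; with blocks capped at twice the target, each per-block
    move stays within max_change (12.5%)."""
    delta = (gas_used - gas_target) / gas_target * max_change
    return base_fee * (1 + delta)
```

A full block (twice the target) raises the fee by 12.5 percent, an empty block lowers it by the same amount, so sustained demand compounds into exactly the surge pricing discussed above.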
Yeah, and I think that was a factor in the DDoS in Ethereum in 2016, right?
Yeah. I mean, it’s the easiest way to kind of, you know, cause malfeasance with low capital cost. It’s much cheaper than a 51 percent attack, and you can just ruin the quality of service for a large portion of the network at a not very large cost.
Yeah, and it appears to be a lot more effective than a 51 percent attack, but well, at least with Ethereum Classic last fall, we saw that it didn’t really have the same effects in real life that the game theory had posited, and I was actually talking about this with Olaf. I can’t remember if this was in…yeah, I think this was in the recording that we released from our conversation at Consensus.
So for those of you interested, you can listen to that episode, but I mentioned to him, oh, you know, that’s an example of how the game theory didn’t play out, and he mentioned that, well, you know, that just shows the difference between game theory and behavioral economics. So, for you, when you’re making your models, how much are you using game theory, and how much are you using behavioral economics to make the models, and what do you think is more important?
Yeah, that’s a really good question. We basically try to take an approach that’s in the middle of the two, and I think that’s how algorithmic traders also deal with trading strategies. Basically, you try to map to historical data as well as possible, and historical data almost always, in trading scenarios, doesn’t reflect the optimal game theory choice. People don’t choose the best transaction fee, or people don’t choose the best execution fee when they’re trading on an exchange.
Oftentimes, they choose what’s easiest or what is kind of what everyone else is doing, and you have this pseudo, you know, short-term tragedy of the commons type of behavior, which is way more akin to the allegories of behavioral economics, like in Thinking, Fast and Slow, the book by Daniel Kahneman, the Nobel Prize winner. So we try to make models that incorporate, certainly, a lot of rationality where it’s people who are measuring statistics, and they’re optimizing for the statistics.
They have a function that tells them what their value is, and they optimize for that, but then we also add in a lot of kind of these known mistakes that humans tend to make, where they mis-order or mis-rank the relative rankings of goods that they want to purchase, and we try to basically add in some noise that looks akin to how behavioral economists analyze real economic data.
And for that, do you also throw in ideology? Because, as we’ve seen, that has led some people in the crypto space to do things that are probably against their financial interest, self-interest?
Yeah, absolutely. I think one of the most successful agent models from kind of the ‘80s and ‘90s is a model called Byzantine Altruistic Rational, and you know, you could argue that everything kind of falls in this framework, where Byzantine is just someone trying to destroy the system. Altruistic is, you know, the religious believer, the true Hodler, and then Rational is the person who is looking at the derivatives market continuously and trying to, like, collateralize their perpetual swap on BitMEX, and you have to kind of model all three of those.
You have to have people who are willing to lose a lot of money, especially in the bootstrapping phases of networks when it’s just very unclear how the demand distribution will tend to evolve over time. So it’s not perfect, but you’re trying to kind of build this incremental set of updates where, you know, you first start with an okay model, and then you see how it performs. Then you add some features that correct for the observed behavioral differences, and then you kind of incrementally improve.
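That BAR split can be sketched as a toy population of agent types in a simulation. Everything below (the class names, the noise model, the payoff rule) is an illustrative assumption, not Gauntlet’s actual model:

```python
import random

# Toy sketch of the Byzantine/Altruistic/Rational (BAR) agent framework.
# All names and payoff rules here are hypothetical illustrations.

class Agent:
    def act(self, reward, cost):
        raise NotImplementedError

class Byzantine(Agent):
    # Tries to harm the system regardless of its own payoff.
    def act(self, reward, cost):
        return "attack"

class Altruistic(Agent):
    # Participates no matter what: the "true Hodler."
    def act(self, reward, cost):
        return "participate"

class Rational(Agent):
    # Participates only when the perceived reward exceeds the cost,
    # with behavioral noise so agents sometimes mis-rank their options.
    def __init__(self, noise=0.05):
        self.noise = noise

    def act(self, reward, cost):
        perceived = reward * (1 + random.gauss(0, self.noise))
        return "participate" if perceived > cost else "exit"

# A mixed population: one attacker, three believers, six rational actors.
population = [Byzantine()] + [Altruistic() for _ in range(3)] + [Rational() for _ in range(6)]
actions = [a.act(reward=1.0, cost=0.8) for a in population]
```

A real model would then iterate this over many simulated blocks and let the reward and cost evolve with the state of the chain.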
I mean, so we’ve talked about how Gauntlet measures for security. You kind of vaguely mentioned inequality in a system. So what are the main factors that these teams are usually attempting to optimize?
Yeah, so, in proof of stake systems, there are just way more knobs. There’s a sense in which proof of stake systems really are kind of structured financial products. You know, you’re getting a fixed income in theory or an expected fixed income, but you can get slashed, or you can lose your deposit, or you can basically not participate in the system because you messed up some type of forwarding or signature or aggregation event.
And in these systems, you kind of have something that looks like a Frankenstein structured product from, you know, in some ways, ironically, 2008. A proof of stake system is a bond, but it’s also a swap sometimes because, you know, you’re swapping interest rates with other validators. If they happen to be a fisherman and they provide a fraud proof against you, you’ve lost out on some reward you were supposed to get, and you basically swapped with them.
And I’m not trying to color how regulators will eventually think about these, but they are very similar, and with those products, one of the reasons you needed such high-fidelity financial engineering, which of course went wrong, was that there were so many knobs you had to turn. So what is my emission schedule? What should my interest rates be?
How should I discourage validators from going offline? How should I encourage validators to stay online? These types of problems are a little easier to reason about in proof of work because the ROI is much more clear. It’s much more tied to questions like: how much hash power do I think there is? How much does my energy cost, and how long can I go on losing money? Versus in proof of stake, a lot of it is, like, do I even understand what the economics of this system are?
Is it possible that I can get inflated away to zero even though I’ve contributed a lot in the beginning? I think that’s really where people are quite worried, and they do really want to kind of tune these parameters to encourage healthy demand and also reward early participants for joining early, without overly incentivizing them in a way that basically means that no one will use the network.
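To make the proof-of-work side of that comparison concrete, the miner’s ROI question (how much hash power is out there, what does my energy cost, how long can I lose money) can be written as a toy break-even calculation. Every number below is illustrative:

```python
# Toy proof-of-work mining ROI calculation; all numbers are illustrative.

def daily_mining_roi(my_hashrate, network_hashrate, block_reward_btc,
                     blocks_per_day, btc_price_usd, power_kw, usd_per_kwh):
    """Expected daily profit in USD for a miner with a given hashrate share."""
    share = my_hashrate / network_hashrate
    revenue = share * block_reward_btc * blocks_per_day * btc_price_usd
    energy_cost = power_kw * 24 * usd_per_kwh
    return revenue - energy_cost

profit = daily_mining_roi(
    my_hashrate=1e15,        # 1 PH/s (illustrative)
    network_hashrate=1e20,   # 100 EH/s (illustrative)
    block_reward_btc=12.5,
    blocks_per_day=144,      # ~one block every 10 minutes
    btc_price_usd=10_000,
    power_kw=30,
    usd_per_kwh=0.05,
)
```

In proof of stake, by contrast, the analogous calculation would have to model slashing probabilities, inflation, and the behavior of other validators, which is exactly why it resembles a structured product.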
Well, I mean, that’s kind of concerning given that…I mean, so I wouldn’t necessarily say that there’s already a shift towards proof of stake systems, but there’s at least a proliferation of new projects that are attempting proof of stake systems. So do you kind of foresee that we’ll see a lot of these projects losing money for people over the next few years until there’s kind of like more learning around how best to design such a system, or do you think that it’s really just better for projects to stick with proof of work or, you know, a hybrid system like Decred’s?
Yeah, so I think these experiments are really invaluable because they’re teaching us a lot about structured products, and I want to make kind of a historical analogy to explain why it’s really useful that we’re trying them. In the 1970s, really the late ‘60s, exchange-traded options were invented for US equities. Options were kind of traded over the counter before that, and options were treated as kind of the Wild West, kind of the way people treat cryptocurrencies and derivatives now.
But in the early ‘70s, basically, these economists figured out a statistical model that explained how one can tie the value of the option to the value of the underlying equity. So in an option, right, it says, okay, I want to buy Apple stock in one month, and I want to pay this price. Will you sell me the right to do that? So I pay a fee up front, but I kind of lock in my price for that stock in a month, and the market is figuring out how to set the up-front fee.
So that’s what the option price sort of is, but because there was not a common ground for traders to compare options for, say, Apple versus Google versus HP…well, other than HP, none of those companies existed then, but you know what I mean, some companies that were on equities exchanges then. It was very hard for traders to actually reason about the risk they were holding, and then once this formula, which is called the Black-Scholes formula, came out, you started to see a huge increase in demand for options products because people could figure out how to quantitatively reason about the economics of them.
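For reference, the Black-Scholes formula he mentions prices a European call option from just the spot price, strike, time to expiry, risk-free rate, and volatility. A minimal self-contained version (the numbers in the usage example are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, expressed via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    time to expiry T in years, risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A one-month at-the-money call: this is the up-front fee from the analogy.
premium = black_scholes_call(S=200.0, K=200.0, T=1 / 12, r=0.02, sigma=0.30)
```

The key point of the analogy is that a single closed-form model gave every trader a common yardstick for risk, which is what he foresees emerging for staking products.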
And I foresee kind of the same thing happening for proof of stake where, over time, the models for understanding the reward distributions will just strictly get better, and there will be something akin to the Black-Scholes formula that makes it easy to understand what the volatility and the expected income are, how these different products interact with each other, and how surge pricing works. Like, does Polkadot provide a better return to fishermen under surge pricing than Cosmos?
But do you get better average income on Cosmos versus Polkadot? Like, those types of little nuances will be able to be compared and understood by participants as long as we keep moving the economic modeling forward, and that’s really where I see the proof of stake experiments, you know, providing us the Black-Scholes of the space. Like, how much is sybil resistance worth, and how should we price it, and what derivative products do you need to hedge risk?
Okay, so it sounds like you wouldn’t necessarily say that the complexity of a proof of stake system means that proof of work is simply superior for a token? Is that right? It sounds like both sort of have their place.
I don’t think there is sort of an obvious way to compare whether one is superior to the other. I tend to think the special purpose blockchain model does kind of make sense, where you have a bunch of different chains that are specialized for certain applications and there’s some way for them to interact, but how to value all these different assets under one umbrella in, say, a Polkadot manner where you’re sharing security can be quite difficult, and I think we’re not at the point where there is a notion of volatility that is well understood for these assets.
I want to also just say, so, for proof of work, I’ve heard you talk about how difficulty adjustments can affect the simulation. Can you talk about that?
So selfish mining, the kind of attack where you hold out a bunch of your hash power and you get to make a separate chain that gives you all of the coinbase transactions, all of the main block rewards with higher probability than you should, is actually very closely tied to how the difficulty adjustment works. Difficulty adjustment is basically kind of this way of the system trying to, on average, estimate what the correct total hash power is.
At the end of the day, the difficulty is totally tied to the total hash power in the system, and the system rebalances how it redistributes rewards, more or less, based on this adjustment. Now, if you are a selfish miner, you’re holding out a significant portion of hash power. Like, selfish mining is, say, effective at 20 to 30 percent of hash power. You’re holding out 20 to 30 percent of hash power, so the system is measuring difficulty incorrectly because it can’t actually see how many blocks should have been produced.
So it says, okay, I’ll make the difficulty lower because it looks like there’s 20 percent less hash power. So making sure that you get the difficulty adjustment correct in simulation, and that the selfish mining behavior, which you do know can occur, is correctly represented, is very important to getting an accurate simulation in proof of work, especially given that you want to make sure you get the forking probabilities right. You want to allow miners to withhold blocks, because that is one of the best known attacks that does affect all blockchain systems.
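The mechanism he describes can be sketched in a few lines. The retarget rule below mirrors Bitcoin’s (every 2016 blocks, scaled by expected versus observed window time and clamped to a factor of 4); the withholding fraction is an illustrative assumption:

```python
# Toy sketch of how withheld blocks bias a Bitcoin-style difficulty adjustment.

TARGET_SECONDS_PER_BLOCK = 600   # one block every ten minutes
RETARGET_WINDOW = 2016           # blocks per adjustment (~two weeks)

def adjusted_difficulty(old_difficulty, observed_window_seconds):
    # Bitcoin scales difficulty by expected/observed window time,
    # clamped to a factor of 4 in either direction.
    expected = RETARGET_WINDOW * TARGET_SECONDS_PER_BLOCK
    ratio = expected / observed_window_seconds
    return old_difficulty * max(0.25, min(4.0, ratio))

# Honest network: blocks arrive on schedule, so difficulty is unchanged.
honest = adjusted_difficulty(1.0, RETARGET_WINDOW * TARGET_SECONDS_PER_BLOCK)

# A selfish miner with 25% of hash power withholds its blocks: the public
# chain only sees 75% of the work, the window takes 1/0.75 times as long,
# and the next difficulty drops accordingly.
withholding_share = 0.25
observed = RETARGET_WINDOW * TARGET_SECONDS_PER_BLOCK / (1 - withholding_share)
selfish = adjusted_difficulty(1.0, observed)
```

The lowered difficulty is precisely the mis-measurement a simulation has to reproduce to get the forking probabilities right.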
Yeah, and even though we were talking about how proof of stake has a lot more factors that can affect security, for proof of work, you know, difficulty adjustments are pretty regular. Like, with bitcoin, it’s every two weeks. I don’t know how often it is for other systems, but you know, it isn’t something that just happens once a year or something like that. So there’s a lot of opportunity for attack. So something else that you mentioned earlier was about the initial token distribution and how that can affect inequality. What are you finding in terms of what are some good ways to distribute tokens so that you don’t have a lot of inequality in a system?
Yeah, so I think especially in sort of the more financial product assets in the system, like Maker, you have a lot of different stakeholders who have tokens. You know, you have the CDP holders. You have the Dai holders. You have MKR holders, and you sort of have people who are the keepers, and in all of these systems, you have a bunch of different types of users, and figuring out what ratio of the rewards those different types of users should get is really important.
I think token distribution for layer 1 blockchains is a little more straightforward, but token distribution for, say, Maker is actually kind of interesting because the question is will one set of users be able to hold the other set of users hostage if they don’t get enough rewards, and where is that transition point where, say, the keepers of the system who are checking for defaulted CDPs, if they don’t get enough ROI, is there some point at which they’ll just stop doing that and then there’ll be a ton of defaults?
So that’s kind of the stuff you can measure…you can use simulation tests to see…if you’re assuming that, say, keepers are rational where they will only stay in the system if they get enough rewards measured in dollars, then maybe they need X percent of the initial token distribution. Those are the types of things that I think are really important for these systems because, at the end of the day, people are participating in this for some type of ROI.
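That keeper threshold can be sketched as a simple parameter sweep. The function names and every number below are hypothetical, and a real simulation would model far richer dynamics:

```python
# Hypothetical sketch: sweep the keepers' reward share to find the point
# where rational, dollar-motivated keepers exit and undercollateralized
# positions start going unliquidated.

def keepers_active(reward_share, total_rewards_usd, n_keepers, cost_per_keeper_usd):
    """A rational keeper stays only if its slice of rewards covers its operating cost."""
    per_keeper = reward_share * total_rewards_usd / n_keepers
    return per_keeper >= cost_per_keeper_usd

def minimum_viable_share(total_rewards_usd, n_keepers, cost_per_keeper_usd):
    # Sweep shares from 0% to 100% in 1% steps.
    for i in range(101):
        share = i / 100
        if keepers_active(share, total_rewards_usd, n_keepers, cost_per_keeper_usd):
            return share
    return None  # no share is enough: the system is unsafe as parameterized

share = minimum_viable_share(total_rewards_usd=1_000_000, n_keepers=50,
                             cost_per_keeper_usd=5_000)
```

With these made-up inputs, keepers need at least a quarter of the reward pool before participation is rational, which is the kind of X-percent answer he describes pulling out of a simulation.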
Whether the ROI is in the chain asset or whether it’s in dollars or whether it’s external, you know, those can vary, but people are doing this because they are trying to work and participate in these systems to gain some rewards, and making sure that those reward distributions are equitable gets increasingly difficult as these systems get more complicated.
And what about projects that have on-chain governance, is that something that is the situation where using something like Gauntlet would be maybe not exactly necessary, but you know, where it would be pretty imperative?
Yeah, I think, you know, everyone desires having educated voters in political systems, and in proof of stake and governance systems, you have a new type of voter where a voter is this financial voter, but they don’t really have a great way of being educated about the decisions that they’re voting on. Right now, if you think about it, most governance votes…a few open source developers, at the end of the day, still are the only ones who understand the true changes to the code.
And most of the rest of the governance process is filtering through Twitter and filtering through Reddit and all of these different forums where people can read commentary on what these changes mean. We would like to be a neutral third party, an independent source who owns no tokens, who you can use to basically put in different parameters that represent yourself. Like this is how I believe I would participate in the system, and how does that change under all these different governance options? So does Tezos at X percent interest rate versus 2X percent interest rate affect my personal utility in a negative way? If so, then I will vote accordingly. I can use that to be an informed voter.
So we envision a world where we can help provide users kind of this neutral, independent way of measuring the economics of these systems, and a way in which they can really be confident in the decisions they’re voting for, which will be especially important over time. In the US equities market, it actually has taken a very long time for voting participation to break 50 percent. It’s something around 72 percent of equities that are voted right now, but most of that is because there are proxy voting services.
So there are services that your broker may use, and you may not even know they basically vote based on some independent research. I suspect there’s going to be some hybrid system for governance in the crypto space where there’s this proxy voting aspect, but there’s also a lot of independent voting, and we just kind of want to be a place where you can get a lot more information and be an informed voter.
And earlier, you talked about there’s the factors that you have to consider at the protocol level, but then also you have to factor in how the exchanges…how they’re affecting behavior. So when you model that, are you literally kind of doing things like, okay, here’s a trader on Kraken, which allows this type of leverage, or here’s a trader on BitMEX, which allows this type of…are you doing stuff that’s that detailed, or is it…how do you model that part?
Yeah. Absolutely. We don’t necessarily do full order books just for speed reasons, but we’ll model kind of the inside levels of exchanges. So take historical data for, say, BitMEX or Coinbase, and our simulation will basically provide that, almost like an oracle, to all the different agents, all the different users in the system, and they can use that…they can decide whether they want that as an input into their value function.
So their value function, say, for a bitcoin miner might be if bitcoin’s price is greater than 10 thousand, sell all my bitcoin, but if bitcoin’s price is lower than 3 thousand, divert all my resources into mining, which kind of represents their risk profile. They want to be in dollars when bitcoin is really expensive. Like, they got out, but when bitcoin is cheap, they want to be accumulating in some fashion, and so in order to correctly do that, you need to take historical data, like market prices.
And then you need to also model how these agents will respond to those market prices. So we take historical data, and this is the part where I think…my cofounder and I, we both were in high frequency trading for a long time, and then he also worked in self-driving cars, and we basically took a lot of the tools that we used for backtesting trading strategies and mixed them in with kind of how you would stress test a protocol.
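The miner value function in that example can be written directly as a policy fed by a historical price feed, which is roughly how the oracle-style simulation input would be consumed. The price series below is made up:

```python
# Sketch of the miner policy from the example above, driven by a historical
# price feed the simulation exposes like an oracle. Prices are made up.

def miner_policy(price_usd):
    """The risk profile from the example: be in dollars when bitcoin is
    expensive, accumulate via mining when it is cheap, otherwise hold."""
    if price_usd > 10_000:
        return "sell_all"
    if price_usd < 3_000:
        return "mine_max"
    return "hold"

# Replayed price feed standing in for historical exchange data.
price_feed = [2_800, 4_500, 9_000, 11_200, 10_050, 2_950]
decisions = [miner_policy(p) for p in price_feed]
```

Backtesting then means replaying real exchange data through each agent’s policy and measuring the system-level outcomes, the same loop used for trading strategies.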
And as the space gets developed more and there’s more derivatives that are built on crypto assets, how will that affect your analyses?
Yeah, I think derivatives are actually, in a lot of ways, the thing I’m most excited for in the space. I think there are a lot of really novel products coming out right now, like hash power derivatives, where miners can basically sell a future on their future hash power, and then as they turn the hash power on, they basically give, in a continuous fashion, all the bitcoin they earn over that time period back to the person they sold the future to, and people are doing the same thing for staking right now.
Those assets are really hard to price, and they’re really hard to…especially with staking, and so we kind of view simulation as the only way you can really figure out how you should price those things, how much risk you want to hold, and whether, you know, you think there’s actual long-term value in being a participant who hedges your risk. I think derivatives are going to be really important for validators who basically are exposing themselves to risk in multiple chains.
And they want to just kind of make their income look constant or look roughly constant, and you can back test against derivatives, and I think one of the most innovative things that’s happened in the space has actually been things like perpetual swaps, which you don’t have in the normal futures market where you kind of have these consistently rolling future contracts. So you can kind of get in and out of these derivatives at all times of day and without any kind of huge adverse selection.
And we kind of talked a little bit about DeFi and the ways in which some of the things you can do in DeFi remind people of the different behaviors that led to the financial crisis. So when you look at that, are there any particular design choices that some of the teams are making that you think they could make in a certain way to prevent that from happening again?
It’s hard to say. You know, in every crisis, historically, you always find some very complex loophole that people take advantage of, and I think the nice part about crypto right now is that we’re able to…you’re in this exciting time where people are really doing these experiments, and they’re not really that big. People talk about, like, X million dollars locked up into a contract.
Relative to the size of the experiments that are happening in conventional finance, these are kind of very small, but yet we’re already getting quite rich data about their failure modes. I don’t think there’s any particular individual thing that I would say is easy to fix, because the DeFi products that I’ve kind of seen, they all are really focused on trying to help long holders of Ethereum get exposure to other things without having to lose their Ethereum exposure.
And at the end of the day, that’s not going to ever really be that different than a lot of the stuff from 2008, and I think the best thing you can say is, well, it’s a more transparent version, and I guess it’s way more over-collateralized, but I guess, like I said, I think these are small size experiments where we’re getting a lot of really good data on how to improve the next generation, and that’s kind of the stuff I’m really excited about, is that people in finance are too conservative to do these types of things.
And I was wondering, do you think that the DAO would’ve occurred if…sorry, the DAO hack would’ve occurred if that project had been able to use Gauntlet first, or is that more of like a formal verification issue? I wasn’t quite sure.
Yeah. Yeah, that’s definitely more of a formal verification issue. So the DAO hack was kind of beautiful, but it also had to do with this sort of reentrancy, which means that you could kind of get back into a certain function call repeatedly, but I think formal verification has a lot of performance issues. When we were building ASICs, even the hardware formal verification is definitely different than software formal verification, but the graph algorithms that are at the core are the same.
And those are where the real, you know, exponential slowdowns come up, and a lot of things in practice end up doing a combination of formal verification for very core parts and then statistical verification for everything else, and that’s kind of where I view how Gauntlet’s complementary to formal verification. We really kind of tell you, hey, if you’re in the DAO, this management fee that you’re paying to the contract is too high because, you know, most of it’s accruing to the first five users, and then the rest get none of it.
All right. Are there any particular projects out there right now that you think have particularly well designed crypto economics?
I think Maker with the Dai savings rate is a really, really great project. I think that, you know, it’s going to be much more complicated with multi-collateral Dai, especially because each asset is going to have its own set of parameterizations, and the users are kind of going to have to be involved in picking those parameterizations, and it will be messy, and I think the Maker team is really quite thoughtful about these design decisions. I think some of the live networks who have had spam problems, so like Stellar and Ethereum, they’ve really learned a lot of lessons from these attacks.
And like, the collective learnings from all of these attacks are going into the next generation systems, but on the layer 1 side, I wouldn’t say that I have any particular protocol where I think the economics are really well designed. It’s really hard to know, given that the demand distributions are really hard to predict. It would be like going to Uber in 2010, when they first started, and saying, hey, I think you guys are going to need a really good surge pricing algorithm, just as a tip, and here it is, and they would probably look at you and be like, we just need to figure out how to get drivers onto our platform. Do you know what I mean?
So I think it’s a little early for us to say, because everything will boil down at the end of the day to the true demand distributions. Gauntlet’s mission is to be a little early, to really start the modeling aspects of that, so that when there is this demand, and, you know, I think we’re all hopeful about Facebook bringing some of that demand to the ecosystem in general, there is a way for people to understand what numbers they’re choosing and how they’re voting and what they’re buying.
All right. So last question. What do you think are the biggest questions or design issues that crypto teams are not thinking about that they should?
Yeah, the number one thing I think that people kind of ignore is the existence of derivatives, and I think the CFTC had this request for information about Ethereum that closed in February, I believe, where they gave 25 questions about different aspects of Ethereum. So they wanted to understand, what is Ethereum? What is a smart contract platform? Why would users use it? And I think the crypto Twitter world kind of derided the CFTC.
They’re kind of like, oh, they’re dumb. They couldn’t read anything. They didn’t know anything, but I think they were just being very thorough, because the last set of questions they asked, in particular questions 17 through 25, really focused on the effects of external derivatives markets on the security of proof of stake systems, and I think proof of stake systems are just very susceptible to attacks from well-functioning derivatives markets.
Right now, the proof of stake derivative market is much smaller than the underlying market, so you can’t do a synthetic 51 percent attack because there’s just not enough liquidity, but I think in a world where you have something like the S&P 500 where there’s a lot more liquidity in the futures market on ES, which is the e-mini S&P 500 Future, than there is on the underlying, you can actually have people selling the rights to more than 50 percent of the staking asset off chain, and then someone could be aggregating that indirectly.
And I think thinking through these types of attacks has been kind of lacking, really nonexistent, I think, in the crypto community, possibly because it involves thinking about the financial aspects, and there’s definitely a little bit of ickiness, I think, for crypto protocol developers and cypherpunks in thinking about these things. Philosophically, it feels icky to them.
Yeah, I agree with you on that. It’s kind of interesting to watch. All right. Well, where can people learn more about you and Gauntlet?
Yeah, so we have a website, Gauntlet.network. We also have a series of blog posts where we kind of go through the different aspects of what makes blockchains secure, how we think about security and modeling users, and a little bit about how algorithmic game theory and trading kind of influence how you should model users, and we’ve published an academic paper with one of our customers, and we’re going to have a couple more coming out soon. So we’re trying to really put out a lot of content to make it kind of salient. So, over the next few months, I think you’ll be seeing a lot of stuff from us.
Great. Well, thanks so much for coming on Unchained.
Yeah, thank you.
Thanks so much for joining us today. To learn more about Tarun and Gauntlet, check out the show notes inside your podcast player. If you have not yet taken the Unchained podcast survey, please now go to https://www.surveymonkey.com/r/unchainedsurvey2019. Your answers will be a huge help to me and my team here at Unchained. Unchained is produced by me, Laura Shin, with help from Fractal Recording, Anthony Yoon, Daniel Nuss, and Rich Stroffolino. Thanks for listening.