WBD258 Audio Transcription

BTC vs ETH Technicals with Andrew Poelstra, Tadge Dryja, Vitalik Buterin & Patrick McCorry

Interview date: Thursday 3rd September

Note: the following is a transcription of my interview with Andrew Poelstra, Tadge Dryja, Vitalik Buterin & Patrick McCorry. I have reviewed the transcription but if you find any mistakes, please feel free to email me. You can listen to the original recording here.

In this episode, I am joined by Andrew Poelstra, Tadge Dryja, Vitalik Buterin and Patrick McCorry.  With Paddy as co-host, we discuss the fundamental and technical differences between Bitcoin and Ethereum, scalability and use cases.


“The point of Bitcoin is to be sound money… we care about scalability, we care about privacy, but soundness always comes first.”

— Andrew Poelstra

Interview Transcription

Peter McCormack: Right, big group of us this evening, so welcome to the show.  How are you all?

Andrew Poelstra: I'm doing well, how about you?

Peter McCormack: I'm doing pretty good.  How are you, Tadge?

Tadge Dryja: Good, yeah.  Lots of fun, almost Fall, getting cooler, it's nice.

Peter McCormack: Paddy, are you well?

Patrick McCorry: I'm good too by coincidence.  Yeah, I'm pretty well.  I'm enjoying the isolation, it's been pretty nice.

Peter McCormack: I'm not; I'm used to travelling.  I'm used to hanging out with these people in person.  Anyway, I did a show recently where I had Vitalik on with Samson and depending on your position, you either thought that was a very good show or it was an absolute car crash.  Samson did exactly what I expected him to do, but it didn't allow some more nuanced discussion in some areas.  

So, I spoke to Vitalik and said, "Is there anyone else you would like to talk to where we could maybe just kind of advance a couple of areas?" and he said, "I'd be very happy and interested to talk with both Tadge and Andrew, I respect them both greatly".  And, Andrew and I, and Tadge and I, have also recorded some great shows.  And then obviously, Paddy, you put out there that you were willing to help moderate, which is great, because there are an awful lot of things we will talk about tonight that I won't understand.  It's the first time I've had a co-host for a show like this, which will be interesting.  So, I'm going to hand over a lot to you, Paddy, but I will jump in occasionally and just say, "Look, come on, I need a bit of a better explanation than that".

The only starting point that I really have is just, this is going to be kind of a repeat of a question for you from last time, Vitalik, but I also think you should answer it, Paddy; and it's the same question, but for the other protocol, for both Tadge and Andrew.  And, this is my starting point and then I'll hand over to you, Paddy; is that okay?

Patrick McCorry: Yeah.

Peter McCormack: So, one of the things that came out of the last interviews, people said, "You framed it as a kind of Bitcoin vs Ethereum show, Pete, and they're trying to do two different things", which is a fair point.  But, it's really interesting to hear from people, in their opinion, what they think something's trying to achieve.  So Andrew, starting with you, with Bitcoin: for you, what is Bitcoin trying to be and what is it trying to achieve?

Andrew Poelstra: So, that's an interesting question that has a bit of history to it, I guess.  So originally, of course, Bitcoin was created by Satoshi.  The first block that was mined had that quote from The Times, the London Times article, talking about bank bailouts in 2008 and 2009.  And, that sort of established the ethos of Bitcoin as being a libertarian, sound money.

The goal of Bitcoin was to be an alternative form of money where the inflation schedule, or where the monetary policy, was set in stone rather than being controlled by political consideration or by central banks or whatever.  And weirdly, the idea that Bitcoin was electronic was almost downplayed; it almost wasn't an important thing.  The idea was that it would be some sort of money that anybody could use around the world and that also, you could use it online.

So, right now, the money that we're familiar with using online is basically you're using a credit card, you're writing cheques or whatever.  The money that you can use on the internet is not a bearer instrument; Bitcoin is a bearer instrument, meaning that you can hold it; if you physically have the Bitcoin, then you control the Bitcoin.  And, unlike other bearer objects, which you only have when you physically hold them, you are able to transact this across the world using the internet.  So, you are not forced to go to some third party to enable your transactions.

So the point of Bitcoin, to be brief, is basically to be sound money.  And, there are other goals that we have now: obviously we care about scalability, we care about privacy, but soundness always comes first in a bitcoiner's mind.  And, money also always comes first.  Bitcoin isn't trying to be a world computer; Bitcoin isn't trying to be like a token platform; Bitcoin isn't trying to be like a DNS server, or like a file storage place, or any of the other ideas that people have come up with over the years and tried to use Bitcoin for.  It is trying to be money and it is trying to be sound, and I guess that's my answer.

Peter McCormack: I have just one thing to ask, to add on to that.  You said it's trying to be sound money, but we also care about privacy, we also care about scaling; but, would you not say privacy and scaling are characteristics of sound money?

Andrew Poelstra: So, when I say soundness, what I mean very specifically here is that you can verify the history of the system; you can verify the current distribution of money, whatever that looks like.  It is the honest distribution of money.  Nobody, like, came in and shuffled stuff around, there were no secret meetings that caused the money to move around.  You sort of know where it's coming from and where it's going.

I would say privacy and scalability are critical to being useful, are critical to being a money that enables a bunch of other societal goals, but just the word "sound", I mean that in a very narrow sense.

Peter McCormack: Okay.  Tadge, how about yourself?  Would you agree with Andrew, or do you have anything else to add onto that, any different opinions?

Tadge Dryja: Yeah, I can add a little.  I generally agree that that's what it's about, you know, sound money; it's not about the blockchain.  Where I work, I guess, a lot of times I talk to people about blockchain and there are a lot of students at the business school and things like that, and they're like, "Oh, but there's this blockchain stuff, and you've got all this data and you've got traceability, and you've got these things".  It's like, yeah, those are all bad, those are all liabilities.  "We can do identity on the blockchain", I'm like, "Yeah, no".  All of these properties are detrimental to Bitcoin, right.  The goal of Bitcoin was, as Andrew said, it's about money, and all of these technical things that people have sort of latched on to are sort of like warts on the system that we are trying to get rid of, and will probably never totally get rid of.  So, yeah, the blockchain's a liability; the traceability is a liability.

I also think that the "set in stone" part is really interesting.  I personally don't really care about, you know, only 21 million Bitcoins, and maybe that's sacrilegious.  But to me, if Satoshi had said, "Okay, the block reward is 1 Bitcoin per block forever", okay, fine.  To me, it doesn't really make a huge difference.  But, I understand that one of the properties of Bitcoin is that it's really hard to change and the set in stone, to some extent, is a societal thing, right; anyone can change the code and run different stuff, but the fact that we all sort of agree that we can't means that we can't.  And so, I'm trying to defend the monetary policy of Bitcoin, I guess, even though I don't really care about it.  So, that's sort of an interesting aspect of what makes it sound money.

Peter McCormack: That's a very controversial point.  I think that you and I will have to talk about that another time.

Tadge Dryja: Okay, yeah.  I think it isn't as big of a deal as most people make it out to be, but yeah, I'm fine with it.
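
For reference, the "set in stone" schedule they're discussing: the block subsidy started at 50 BTC and halves every 210,000 blocks, which is where the roughly 21 million cap comes from.  A quick illustrative check in Python (the real consensus code does the same integer halving, in satoshis):

```python
# Bitcoin's issuance schedule: 50 BTC per block, halving every 210,000 blocks.
subsidy_sats = 50 * 100_000_000   # block subsidy in satoshis
total_sats = 0
while subsidy_sats > 0:
    total_sats += 210_000 * subsidy_sats
    subsidy_sats //= 2            # integer halving, as the protocol does
print(total_sats / 100_000_000)   # -> 20999999.9769 BTC, just under 21M
```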

Peter McCormack: Okay, so just then flipping over to Ethereum, and sorry Vitalik, because I know you did this last time, but perhaps there'll be a new, improved version; but, can you answer the same question for Ethereum?

Vitalik Buterin: Yeah, so in Ethereum, the protocol and the idea were born in this kind of environment where you had Bitcoin; you had Namecoin, which was this earlier system that was basically, well, a predecessor to things like ENS and Handshake and other things that exist right now; and we had coloured coins trying to issue assets on blockchains, and other protocols.  And so, the original goal of Ethereum was basically just trying to see if we can take a blockchain, take the properties that a blockchain provides, and generalise them more.  And basically, instead of just being able to create a system of money where you can verify the history, ensure that the rules are going to be followed, ensure that the rules are going to be hard to change and all of these things, I tried to apply that to other kinds of applications as well.

And I tried to create a platform where you can build applications more generally, where you can agree on a set of rules; and unless the people who are participating in the application agree to either change the application's own rules or just switch to another application, the rules that they are participating under are going to be the thing that they agreed to, and there aren't any kind of sneaky back doorways to go in and change them.

And so, the idea is something that is obviously valuable for a currency; it's valuable for a lot of financial applications going beyond the currency; and it's valuable for potentially non-financial applications, things like domain names probably being a big one; so, just to create this open playing field and see what people end up building with it.

Peter McCormack: So, Paddy, you're a fellow Irishman, although you're from the northern part, right?

Patrick McCorry: Yeah, I am from Belfast, yeah.

Peter McCormack: You can tell by the strong accent.  Some people don't realise, but if you have some Irish in you, you know both.  Listen, Tadge and Andrew have both been on my show before; people know who they are and people will know Vitalik because, well he's Vitalik; come on!  But, not everyone listening will know who you are.  Can you just let people know who you are and what you do, Paddy?

Patrick McCorry: Yeah, so I got started in the cryptocurrency world back in 2013.  I was an undergraduate at the time studying cryptography, and my soon-to-be PhD advisor told me about this thing called Bitcoin being used on the dark web and I thought, you know, that sounds quite interesting, let's jump into that.  So, when I first discovered Bitcoin, there were really two different narratives that were happening at the same time.  One was the payment narrative, you know, where you go on Silk Road, you buy some, whatever, I don't know, candy on Silk Road and it gets sent to your house; and it was advertised as a global decentralised anonymous currency, though as we learned in the next year or two, it's the most traceable public currency in the world.  So, that was the payment side and that was what I got interested in.

And then there's the second narrative that both Tadge and Andrew were talking about, which is this protection from fractional-reserve banking and limited inflation, and being able to verify the money supply; being sound money.  But the question is, why do cryptocurrencies give you this sound money property?  And that's because of the cryptography, in the sense that I can download the entire transcript of Bitcoin, I can download all the transactions that have ever occurred, and I can independently verify everything that's happened in the past.  And, that's the kind of strong narrative that comes out of Bitcoin.

Now, so far on Bitcoin, that's only really useful for financial transactions, where you want to send coins back and forth; apart from Lightning, which we can talk about soon, which is like a special type of smart contract.  So, what got me excited about Ethereum was that you could take this idea, where I can verify everything that the counterparty is doing and I can hold them accountable, and actually extend that functionality.  So, one great example of that is Uniswap.  I can go on the Uniswap website, I can swap, you know, my ETH for Sushi, and Sushi for some other terrible coin; but, I can do it in a way where I don't have to trust anyone and I can verify everything that's going on.

So, now what you have are actually verifiable agreements where you don't have to trust a counterparty.  So, it sort of takes the idea of Bitcoin, where everything is verifiable, and brings that forward to Ethereum, which is what got me interested in it.  That's sort of why Bitcoin is interesting and why Ethereum is also interesting.

Peter McCormack: So, the angle I'm always interested in on these shows is, and I expect today we're going to go quite technical, and I don't think I need to repeat how untechnical I am; but, the reason that I am interested in these shows, and for my audience, is that based on the information you may provide today, people might treat this as an opinion that helps them make a decision on whether they want to invest in something or not.  But at the same time, I do understand today we might get into some quite technical things.

So, I'm going to hand over a lot of this to you, Paddy.  I might dip in occasionally and say, "Come on, explain that a bit for me".  For anyone listening who doesn't know me, I've invested in a lot of cryptocurrencies historically; I don't any more.  I only hold Bitcoin, and that's all I care about, but I have bought Ethereum in the past, a long time ago, and I did make some money off it, but I took the podcast to Bitcoin only and that's what I'm focussed on now.  I don't have a huge interest in Ethereum, but I am willing to listen to these conversations; for me, it's just very interesting to see Tadge, Vitalik and Andrew all talk about these things, and I don't know what's going to come, but I'm very interested in that.  So, Paddy, I'm going to hand over to you and I will just dip in as and when I think it might be required.

Patrick McCorry: That's awesome.  I can sort of start off with this Bitcoin to Ethereum transition now.  So I think throughout the show, we're going to cover basically three topics: one is the origin story of Bitcoin and Ethereum, which we've sort of just done in a way; then we're going to move into how both systems are Frankenstein-like systems, all sort of really ugly and horrible under the hood, and how we can still build self-custody protocols on top.  And when I say self-custody, it means I can interact with a counterparty, I don't trust them, and I still hold full custody of my coins.  And afterwards, because everyone here is basically a researcher, obviously, I'm going to move on to the future scalability of both networks, and how it is approached by everyone on this call; so, I think that will be quite interesting for everyone.

But, what I want to first start off with is the Bitcoin world, so Bitcoin to Ethereum.  So as we know, Bitcoin had these narratives: protection from inflation and fractional-reserve banking, which was clearly in the whitepaper and the genesis block; it was on the Bitcoin website in 2009 to 2010, 2011, and I think it was removed around 2012; and also this idea of a truly peer-to-peer payment network.  But as Vitalik alluded to, there was this third thing involving Bitcoin, which Satoshi never really planned for, with several new applications: coloured coins, which eventually became things like Omni; and also, games like Satoshi Dice.

Now I remember Satoshi Dice; that was about 2012 and that was when I first started looking at Bitcoin, before I did my PhD, and Satoshi Dice wasn't really welcomed on Bitcoin at all; the reception was very hostile.  It was this basic gambling game where basically, the casino gives me a hash; I send my bet inside a Bitcoin transaction; if I win the bet, they return a transaction to me with my prize; otherwise, I can verify why I lost.  And, that was considered spam on the network; it was bloating the blocks.
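
A rough sketch of the commit-reveal pattern Paddy is describing, in Python (illustrative only; this is not the actual Satoshi Dice protocol, and the bet identifier and payout logic here are made up):

```python
import hashlib, hmac, secrets

# The casino commits to a secret before any bets are placed.
secret = secrets.token_bytes(32)
commitment = hashlib.sha256(secret).hexdigest()   # published up front

def roll(secret: bytes, bet_txid: str) -> int:
    """Deterministic 'dice roll' derived from the secret and the bet."""
    digest = hmac.new(secret, bet_txid.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big")      # 0..65535

# After the casino reveals the secret, the bettor can check both that the
# commitment matches and that the outcome was computed honestly:
assert hashlib.sha256(secret).hexdigest() == commitment
print("roll:", roll(secret, "my-bet-txid"))
```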

And, there was one really prominent core developer, who I won't name, and who is also very active in the mining area, who basically threatened to censor these Satoshi Dice gambling games.  So, I wanted to get your opinion on this, Vitalik: you had already picked up on a lot of this from Bitcoin, and you thought, well, maybe we could build a new platform for doing this on Ethereum; what was your take on that; how did this influence Ethereum?

Vitalik Buterin: Yeah, I remember Satoshi Dice quite well and it was definitely, I thought at the time, a wonderful use case of the blockchain, a clear demonstration of how, if you have this decentralised platform, you can get a lot of these extra security guarantees that you would not be able to get in a more traditional setting, right?  And, even their use of the blockchain was going a little bit beyond a payment rail where, as I recall, they used block hashes as one of the inputs to the randomness; so they were using it for one small thing other than just a way of moving money between you and Satoshi Dice and back.

So, when I was first starting Ethereum, this was actually in the context of when coloured coins and MasterCoin and some of these later protocols that were trying to use the Bitcoin blockchain were happening, and they were taking this kind of non-currency use of the Bitcoin blockchain idea even further, right?  They were saying, let's send transactions on chain and, from the point of view of the Bitcoin protocol, these transactions just look like they have some junk data.  But, from the point of view of our own protocol, from the point of view of this piece of code that we're going to send to users, that users can download and run themselves, they can basically kind of interpret the Bitcoin blockchain.

And so, imagine creating a language where the way you speak it is you kind of say a sentence in English, but you choose synonyms so that if you take the third letter of every word, you actually get a sentence in some other language; there's a kind of hiding going on.  And, they would kind of metaphorically read the third letter of every transaction, so to speak, and see whether there are these operations in this other protocol; and based on that, let users transfer and issue tokens and do the financial transactions involved.

To be fair, there are a lot of people saying that I proposed these ideas and they got turned down and so forth; that's not what happened, right?  It was more that there was this existing climate where there were a lot of people who were not in favour of these kinds of applications and said things like, you know, the blockchain should be used for the currency; and so, okay, it's better to start the platform on top of a kind of community that wants these things to happen.

So, at first I was considering building on top of the Primecoin community; this was one of the bigger altcoins back then.  And then, I started building the Ethereum community, and it got much bigger than I expected; and that's when I realised, hey, there's actually enough support on this side to make it possible to create a separate blockchain.  And so, we ended up creating a separate blockchain and going from there.

Patrick McCorry: That's pretty cool.  So, just to summarise that, because that's quite a long answer; I guess there are two points to it, aren't there?  The issue with MasterCoin and coloured coins was that basically, this new protocol was added on top of Bitcoin.  Bitcoin itself was not aware this existed; it wasn't able to enforce the rules.  So if I wanted to, I could try and make myself MasterCoins and the Bitcoin blockchain wouldn't do anything about that.  All the validation was client side, so my software that read all the MasterCoin transactions could happily enforce the rules, but Bitcoin couldn't do that for me.  And obviously, for Ethereum, you wanted the platform to enforce the rules, not necessarily the users of the protocol or the app.

And then, the second one was the community itself.  You sort of had this idea that Bitcoin was, not hostile, they just weren't welcoming of the idea.  You went to the Primecoin community, because they were more welcoming to it; then you thought, well actually, I've got enough critical mass; let's just go for our own blockchain and see what we can build.  Cool!  Oh, go ahead, Andrew, jump in.

Andrew Poelstra: Yeah, so let me respond to a couple of things that Vitalik brought up.  So, as he said, there's sort of two things that Vitalik was discussing: one was Counterparty or coloured coins, or whatever; and the other was Satoshi Dice.

Regarding coloured coins, part of the opposition to this was not only an opposition to using the chain for too many things, but specifically to using the Bitcoin blockchain, or any particular blockchain, for multiple assets.  It introduces some scary changes to the incentive model, where your miner fees are always denominated in Bitcoin; they are always denominated in the primary currency.  But, there was a concern, which I think actually applies to Ethereum as well, that if you have an ecosystem of many different tokens, and that ecosystem is much larger in terms of economic activity than your primary currency, then you have a situation where your chain security, which is incentivised in terms of the primary coin, is paid for by something that is different from the asset that people actually using the chain are using; so, there's a disconnect between the usage and how you pay for it.  So, that was part of the opposition to coloured coins.

But, there was also, and I think this is really much more what Vitalik encountered, an opposition that we saw both to Counterparty and to Satoshi Dice, and so forth: using the Bitcoin chain for things that weren't Bitcoin, because this was putting work on Bitcoin validators who simply wanted to be validating Bitcoin.  So, my recollection of Satoshi Dice, I just want to really bring up this story, is that once Satoshi Dice came up, the way it worked basically, as Vitalik maybe described, is you would send some coin to the Satoshi Dice server and then, if you win, they accept the coin and maybe they'd send your coin back with some extra coins.  So, every single bet required a transaction and every win required an additional transaction.  And as Paddy mentioned, there was a certain Bitcoin developer who at the time had a significant amount of hash power, who really did not like this and was actively trying, to the extent that he was able to, to censor these transactions; and at the time that ability was not trivial, though today we would hope any individual miner has only trivial ability to do that, but that's for another discussion.

The issue that I had with Satoshi Dice was that it was so needlessly inefficient because, at the time, we had a scheme called probabilistic transactions, and you can Google this: Bitcoin Wiki, probabilistic transactions.  This was an idea from Mike Caldwell, the Casascius coin guy, I think, from like 2011 or 2012, where you could do a probabilistic payment, a probabilistic payout, using only a single transaction for people who win.  So, there are a couple of pieces to this and I'm not going to go into the detail, but my point is that there were maybe more efficient ways that Satoshi Dice could have been implemented that weren't needlessly producing a tremendous amount of transactions.
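
A toy simulation of the idea, in Python (illustrative numbers only; the real scheme on the Bitcoin Wiki uses transactions that are only valid with some jointly determined probability, which isn't modelled here):

```python
import random

WIN_AMOUNT = 0.01   # BTC owed per winning bet
P = 0.01            # instead, pay WIN_AMOUNT / P with probability P

def probabilistic_payouts(n_wins: int) -> tuple[float, int]:
    """Settle n_wins winning bets; return (total BTC paid, txs used)."""
    paid, txs = 0.0, 0
    for _ in range(n_wins):
        if random.random() < P:       # only ~1 in 100 wins hits the chain
            paid += WIN_AMOUNT / P    # but it pays out 100x the amount
            txs += 1
    return paid, txs

paid, txs = probabilistic_payouts(100_000)
# Expected value matches (~1,000 BTC) with roughly 100x fewer transactions.
print(f"{paid:.2f} BTC over {txs} transactions")
```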

And similarly, the way that Counterparty and the way that other coloured coin schemes were implemented, there were components of these that were also needlessly inefficient.  So, there were two pieces to the opposition, I guess.  One was this opposition from Bitcoin people to using the Bitcoin chain for anything that wasn't Bitcoin.  But, the other was this desire to really just minimise what's hitting the chain entirely.  And since I don't want to take up all the time, I'll just say you can Google my talk from Scaling Bitcoin in 2017 called, "Using chains for what chains are good for", which I think we'll come back to in a later section of this discussion.  But, it's just about trying not to use the blockchain, as much as you can get away with it, even when doing cool stuff; really minimising the amount that you touch the blockchain.

Peter McCormack: This is one of the points where I just want to jump in.  So, I think what you're trying to get at here, Paddy, is that essentially Bitcoin is meant to be censorship resistant, and that, whatever you think of Satoshi Dice, if the blockchain is available for you to use as you want, nobody should be able to censor this.  But at the same time, I'm also thinking, like, Bitcoin is very early at this point.  I mean, I've read the old Bitcoin threads about this, where it was seen as a lot of spam on the network.  So, what do you think would happen, Andrew, if somebody was to do something similar today; or, do you think the network has matured to the point now where it's basically not affordable, because of the fees, to transact like that?

Andrew Poelstra: It's a good question.  So, the reason that Satoshi Dice went away is that the specific mechanism they were using, which was chaining transactions off of each other, caused them to be easily exposed to double-spend attacks, and I think they were actually attacked in a pretty dramatic way in the end.  I think it's probably possible to do something like Satoshi Dice, maybe a bit slower, where you didn't have that issue, in which case the question is more interesting.  Suppose somebody tried to do that today; I think what they would run into is that if they tried to do something that gratuitously inefficient, they would just be priced out of the market; you wouldn't be able to run a service like Satoshi Dice.

Unfortunately, if you are a large exchange running a very high-margin Bitcoin business, you are able to afford tremendously inefficient transaction patterns right now; any large Bitcoin exchange, or any large cryptocurrency exchange, just has such a tremendous amount of revenue relative to network costs on Bitcoin that it is still possible today to use gratuitously inefficient transaction patterns to execute even fairly simple protocols, like paying out your users when they ask you to withdraw money.

So, I guess, in some sense we've seen a transition away from things like Satoshi Dice, where it's just kind of comical how inefficient they were, but we are still at a point where, as you say, Bitcoin is very early, and you can do things which, maybe in five or ten years, we are going to look back on and say were comically inefficient.  Maybe we're going to see a trend towards those kinds of things being priced out, because I think as long as you can afford to be silly on the Bitcoin blockchain and just pointlessly waste space, that's something that we should be concerned about; because that ability to afford being silly is identical to the ability to afford messing with the blockchain security, the ability to afford trying to rewrite the chain, or something like that.  You really would like people to be spending as much as they can on security and caring about the cost of securing those transactions.

Peter McCormack: Okay, fair enough.

Tadge Dryja: I can just add a little about that.  I think, maybe 2012, 2013 had more of this, but even today there's the idea that, okay, there are things that you can do right now in Bitcoin, but we don't want to give people the idea that they can do those forever, because then when you take them away, they get really mad and do things like make Bitcoin Cash.  And then, that's what happened years ago, where there were a lot of people who were working on Bitcoin and saying, "I was promised instant, free, unlimited transactions", and it's like, "Well wait, who promised that?", because most of the developers working from the very beginning were saying, "Okay, scalability's going to be a big issue here", and worrying about Satoshi Dice.  But, that doesn't always, you know, get through to people.

And so, that was, I think, one of the big worries with Satoshi Dice as well as later things that, if people really get into this and like it and all your users are Satoshi Dice users, this isn't a sustainable path forward for Bitcoin, because it's not really going to scale; whereas, maybe some other things might scale better.

Patrick McCorry: Vitalik, do you have a comment before I make a comment?

Vitalik Buterin: It's interesting to kind of think about these topics in terms of the approach that Ethereum took, where I think it's less of a technical difference and more of a philosophical difference, where I think the community is much more on the side of, well, you know, we don't discriminate: if your thing pays the fees, then you're welcome; which is definitely, I think in some ways, a reaction to this kind of approach of trying to say, the chain is intended for these things and not for these other things.

And, you can see more of the Satoshi Dice-like, worse-is-better philosophy, I guess you can call it, in something like Uniswap, for example; I like to give a topical example.  If you talk to financial, or traditional finance, people, they'd say Uniswap is crazy: where's the order book; why would a market maker even want to just quote this kind of blanket xy=k curve, instead of being able to kind of choose their limit orders more efficiently and kind of drag them around the way they do with traditional exchanges and so forth?

But, the reality is that, in the last three years, that kind of simple and dumb won, right; and as of a couple of days ago, simple and dumb xy=k, as Uniswap is, now has more volume than Coinbase.  So, there is something to be said for the simple and dumb thing that kind of feels ugly on paper, but at the same time has these benefits.  In Uniswap's case, I think user experience was a big one.  One of the predictions that I made for Uniswap, for example, was that DEXes can have higher usability than centralised exchanges, which sounds surprising, but it's possible because these DEXes are on chain and so, as a user, you just go to this website and you just open up MetaMask and send a transaction.  You don't have to create an account and deposit your coins and then do some other things and withdraw your coins.  And Satoshi Dice, I think, appealed to people because it had that similar feel.

But, on the other hand, Uniswap does have these kinds of challenges around, well, right now, transaction fees.  Now, one thing I will say is that one of the reasons I think Uniswap won is that its gas usage is vastly lower than a lot of the other DEXes that came before it.  Uniswap is in the 50,000 to 100,000 gas range, but a lot of these previous ones were around 200,000, 500,000, 700,000; back then, they didn't care much.  It's made all of the improvements that it can make but even so, it turns out that people really like being able to exchange between assets on chain.

So now, it's like somewhere between a sixth and a third of Ethereum's activity; you can see the cost of it and you can also see the benefits of the philosophy.  From a security standpoint, you know, Ethereum transaction fees have been, averaging over a longer timeframe, somewhere between a quarter of and as high as the mining block reward for the past month or so.  On some days, they have gone higher; but on the other hand, transaction fees being high means that some people have to pay those higher transaction fees; it's a balance.
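
For readers unfamiliar with the xy=k curve being discussed, here is a minimal Python sketch of a constant-product swap (toy pool numbers; Uniswap v2 does charge a 0.3% fee on the input, but the exact contract arithmetic differs):

```python
def constant_product_swap(x_reserve: float, y_reserve: float, dx: float,
                          fee: float = 0.003) -> float:
    """Sell dx of asset X into an x*y = k pool; return dy of asset Y."""
    k = x_reserve * y_reserve
    dx_after_fee = dx * (1 - fee)    # 0.3% fee taken from the input
    new_x = x_reserve + dx_after_fee
    new_y = k / new_x                # reserves must keep x*y = k
    return y_reserve - new_y         # amount of Y paid out

# A 100 ETH / 40,000 DAI pool quotes a spot price of 400 DAI/ETH,
# but selling 1 ETH moves the price against you along the curve:
print(constant_product_swap(100.0, 40_000.0, 1.0))   # ~394.9 DAI
```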

Patrick McCorry: Yeah, I think there's a lot to unpick out of these three answers, so I'm going to try and pick them out a little bit so people don't get lost in the weeds.

The first is the use of the blockchain.  So, the point about Satoshi Dice was that there were all these chains of unconfirmed transactions, and that was seen as a waste of the Bitcoin blockchain, because the goal there is to make it so, you know, 99% of the population can verify the blockchain in its entirety.  If we have all these wasteful transactions in it, it just adds to the time it takes to verify the blockchain.

The other part is fees themselves and why people would pay fees to do something.  So, as we've seen on Ethereum today, the fees are a bit ludicrous and crazy at the moment, but that's also because if someone does transaction A and they pay, I don't know, X, but that transaction's going to make them Y, then they're willing to pay anywhere from X up to Y in order to make a profit from it.  Basically, as long as that transaction's profitable, they have a financial incentive to run that transaction.

Peter McCormack: Can I ask a question there, Paddy?

Patrick McCorry: Yeah, go ahead.

Peter McCormack: So, I agree with this idea that essentially, the blockchain is a free market and as such, the prices are dictated by supply and demand.  It's more of a question for Vitalik though.  Has this massive increase in prices priced out any particular uses of the blockchain, in a way that has you concerned?

Vitalik Buterin: It has, in that we can see that there are applications that three years ago would have been on mainnet that right now are living on testnets, especially the non-financial ones.  Like a couple of weeks ago, the thing that was popular was this Dark Forest game, right, that was basically using zero-knowledge proofs and the blockchain in this really clever way to implement this game that has you jumping around different planets and exploring space; and, if you find someone then you fight them and all these things.

And, the game was running on the Ropsten test network.  Now obviously, if you're running on a test network, it's much less secure and all of these things, but they had no choice, right, because if they had run on mainnet, it probably would have cost all of the players many, many thousands of dollars and it would just not have happened.

So, non-financial use cases are definitely having a really hard time now and it is definitely a concern; well, some people might say a concern, some people might say, well, blockchains were primarily financial tools all along and this is what's supposed to happen: when you have limited space, the financial use cases end up outbidding the less financial ones.

Peter McCormack: See, what that says to me is, it kind of reinforces what I like about Bitcoin.  It essentially tries to do one thing very, very well and very securely, which is the movement of value around the world, and it's like a tank for that.  And it also kind of reinforces, I think, why Satoshi Dice wasn't such a good idea for the blockchain, and in some ways it's kind of highlighting the same issues now for Ethereum.  So in some ways, I wonder if that changes the future.  I wonder if people will move to other blockchains and ideas, or whether that will change, maybe narrow, what people think Ethereum will be used for, because you've got projects which can't get off testnet; or, are these things that will be solved with ETH 2.0?

Vitalik Buterin: The various Ethereum scaling technologies, I guess, combining ETH 2.0 and rollups, are definitely the thing that the ecosystem is going full speed ahead on right now.  So, I know that there are a lot of projects that are increasingly looking at the rollups.  The rollups are much shorter term than things like sharding, right, which will probably take a good while, potentially one to two years, to get out there; whereas with rollups, the optimistic ones are coming in a few months, and the zk-Rollups use fancier technology, but they're actually simpler because they only support a couple of applications instead of general-purpose smart contracts, and some of them are here already.

And, one of the things that I heard recently is that Loopring, this is the decentralised exchange that's running on top of a zk-Rollup, they are integrating a Uniswap-like automated market maker into their platform, I think starting from the next version.  So, I definitely think that people should not be rushing to conclusions until the scaling technologies are ready and when they are, we'll see.  Like, I can see one of a couple of outcomes.  One outcome is that transaction fees are kind of durably cheaper again.  The other outcome, which is both less and more optimistic, would be just the possibility that scaling attracts a much larger amount of usage; and because you would have a thousand times more space but a thousand times more users, we would be kind of back to square one as a financial platform, but a financial platform for many more people than there are now.

In that second future, it turns out that for technical or security reasons, or whatever, we will not be able to scale it even more beyond some point.  So, I see those two futures, and it's also possible that we will be able to scale to many more users and be cheaper for those users, but we'll see.

Peter McCormack: So, if that happens, then you'll be announcing ETH 3?

Vitalik Buterin: So, Eth3, I mean …

Peter McCormack: Come on; that was a joke!  Do you actually have an answer?

Vitalik Buterin: I definitely have kind of thoughts on Eth3.  This is one of those things that different people on Ethereum have different opinions on.  Like, Justin Drake is kind of excited about Eth3.  I'm much more in the "ETH 2.0 is the end of history" camp.  Basically, this is a kind of technical rabbit hole that is probably too rabbit-holey for a Bitcoin audience, but I think there are limits and kind of complicated trade-offs on how far you can scale a sharding system.

Basically, the higher the number of shards that you push, the higher the minimum number of users the system needs to have in order to be safe.  And so as you scale it up more, it becomes more brittle.  And so, if ETH 2.0, with rollups on top of ETH 2.0, can't fix it, then basically nothing can, because you would just be unacceptably sacrificing security to try to push the system to its limits.  But, that's much more rabbit-holey talk, so maybe we'll talk about it some other time.

Peter McCormack: Yeah, I think you should get ETH 2.0 out first before getting on to Eth3!  Sorry, Paddy; sorry to interrupt you.

Patrick McCorry: It's okay.  So, I think what we can do is focus on how Bitcoin and Ethereum work today for the next little bit.  So basically, we have got to the point where I really want to talk about why they're Frankenstein systems, and then we're going to talk about scalability, so we'll touch upon rollups, Lightning and hopefully, ways that we can scale using the same hardware.

So, we basically got to the Frankenstein topic just before we went off on the tangent about rollups.  So, why do I call these Frankenstein systems?  Because Satoshi Nakamoto was not a very good programmer; there are plenty of bugs inside the early Bitcoin Core software.  So, I've made a little list here.

So, back in 2010, there was a bug in OP_RETURN that would allow anyone to steal anyone's Bitcoins.  Multisig has a bug: at the start of a multisig script, there is an extra zero, and if you forget your zero then you're going to run into problems.  Several opcodes were considered dangerous and they were just disabled altogether.  When you verify a Bitcoin block, you actually create this Merkle tree, where you hash every transaction, you build a little tree and the root of the tree is the commitment to all the transactions.  That's not implemented correctly in Bitcoin either; there's a bug in how the last part of the tree duplicates a transaction.  So, there are so many little quirks in Bitcoin that make it really horrible.  Ethereum equally has another list of bugs that I'll hopefully get into soon.
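
A small Python sketch of the Merkle quirk Paddy mentions: Bitcoin duplicates the last hash of any odd-length level, so two different transaction lists can produce the same root (this is CVE-2012-2459; toy hashes are used here for brevity):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list[bytes]) -> bytes:
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])   # the quirk: duplicate the last hash
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

a, b, c = (dsha256(x) for x in (b"tx-a", b"tx-b", b"tx-c"))
# Two *different* transaction lists, one and the same Merkle root:
assert merkle_root([a, b, c]) == merkle_root([a, b, c, c])
```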

One of the things I wanted to highlight actually takes us onto the topic of wasted Bitcoin block space.

Peter McCormack: Can we just go back a step there, Paddy, just because you've raised a number of things there that other people might not be aware of or have heard of before.  Andrew, can you just respond to that; like, are these things known and frivolous, and we shouldn't care about them?

Andrew Poelstra: Yeah.  As a Bitcoin user today, you don't need to worry about any of these things; these bugs have all been either fixed or somehow shimmed around.  These aren't active problems with Bitcoin, but they're historical things, as Paddy says.  It's kind of surprising that you would see a bug like that.  A couple of them have been fixed for literally ten years; I think most of them have been fixed for more than five, so it's all historical stuff that does not affect Bitcoin today, and they are all well known within the Bitcoin OG developer community.

Patrick McCorry: So, they're all known bugs.  So, one thing I wanted to bring up first was the waste of block space, and so if you consider Bitcoin versus Ethereum, this is one topic I find very interesting and I used to get the students at university to mull it over a bit.  So, in the Bitcoin world, there is this UTXO model, and the idea is that every time you send me a coin, there is a new entry in the ledger saying that my address has this coin.  So, every time I receive new coins, there is a new entry in the ledger.  And, it's a bit like a wallet; I could end up with like ten different sets of coins.

And, when I want to go and spend my coins, I create a transaction; I pick maybe two or three of these outputs, these coins; I put them into the transaction; and then I basically compress them into one output and send them out.  But the main point there is, every time I send you a coin, there is a new entry in the ledger; and when you create a transaction, you're going to have to pick several of these entries.

In the Ethereum world, it's the account-based model.  So, every time you send me a coin, it just increments my balance, as you would expect an account-based model to work.  When I spend my coins, you just check that I have enough coins in my balance, then I spend it.

Now, back in I think 2017, maybe 2018, Coinbase got a lot of flak, a lot of bad press, because they had 1.5 million UTXOs that accounted for, let's say, 250 Bitcoin; it was around that region.  But, because they had so many ledger entries, these outputs, it was not economically viable to spend them, because every output was like $3 or $5 or something, even though they added up to a couple of hundred Bitcoin.

To get around this, what Coinbase have to do is wait until the fees on the network go really low and then create transactions that basically batch all of these entries into one.  And, all they're doing is managing their coins, and they have to create Bitcoin transactions and send them to the network.  Now, my question would be: is it Coinbase's problem; should it be Coinbase's problem; should they have to manage their UTXOs; do they need to do this batching; is this not just a waste of the Bitcoin blockchain because of a problem in the protocol?  In the account-based model, this doesn't exist, because you just keep topping up the same account.  So, I've seen Vitalik unmute himself, so I'll let Vitalik go first; see what he thinks.
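
To make the two bookkeeping styles concrete, here is a toy Python sketch (illustrative only; real coin selection, transaction IDs, signatures and fees are all far more involved):

```python
from dataclasses import dataclass

# --- UTXO model (Bitcoin-style): the ledger is a set of discrete coins ---
@dataclass(frozen=True)
class UTXO:
    txid: str
    vout: int
    owner: str
    value: int          # satoshis

utxo_set: set[UTXO] = set()

def utxo_pay(sender: str, receiver: str, amount: int, fee: int) -> None:
    """Gather enough whole coins, consume them, emit payment + change."""
    chosen, gathered = [], 0
    for coin in (c for c in utxo_set if c.owner == sender):
        chosen.append(coin)
        gathered += coin.value
        if gathered >= amount + fee:
            break
    assert gathered >= amount + fee, "insufficient funds"
    for coin in chosen:
        utxo_set.remove(coin)                         # inputs are destroyed...
    utxo_set.add(UTXO("new-tx", 0, receiver, amount)) # ...new entries appear
    change = gathered - amount - fee
    if change:
        utxo_set.add(UTXO("new-tx", 1, sender, change))

# --- Account model (Ethereum-style): the ledger is a balance table ---
balances: dict[str, int] = {}

def account_pay(sender: str, receiver: str, amount: int, fee: int) -> None:
    assert balances.get(sender, 0) >= amount + fee, "insufficient funds"
    balances[sender] -= amount + fee                        # one row down,
    balances[receiver] = balances.get(receiver, 0) + amount # one row up
```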

Vitalik Buterin: Yeah.  I'm sure you will be able to describe the justifications for the UTXO approach even better than I can.  But as Paddy mentioned, there's stuff that we had frustrations with.  I even remember experiencing this myself back in 2014 when we were doing the Ethereum ether sale, and we had this one address that had gathered together, like, 9,003 inputs, or something like this, and these were all in a multisig cold wallet.

And so, what we had to do was basically just write a script and generate this huge number of transactions on chain that gathered up all of those outputs and kind of combined them together into a hot wallet, and the software we were using had limits on how much you could batch together at any time.  And, we started off by kind of grabbing the big outputs that we received; and when we got to the small ones, there was a whole bunch of dust.  And eventually, when there was not too much money left, I took all four private keys off all four different machines, exported them, put them all onto my laptop and then just wrote a script to generate the rest.

So, it was definitely a time-consuming and not a user-friendly process, and this was not the only time.  There was also a time in 2013; this was when I was writing basically Bitcoin wallet software, and I started to run into some similar issues.  And, one of the issues was, I think, that if you want to, let's say, send a transaction that pays someone some amount, then you have to figure out what the fee is going to be; but then, that fee adds a bit to how much you have to pay; and then, what if the amount that you have to pay requires adding an extra input; and then, the extra input increases the size of the transaction, which increases the fee, in a bit of a recursive loop.

So, there were these kinds of things that I think you can say contributed to motivating Ethereum's kind of account-based approach.  Now of course at the time, I didn't really understand well what the technical benefits of the UTXO-based approach are, and now we have more of a handle on those arguments; and either here or at some point, we can talk about some of the things we are doing to try to get the benefits of both approaches at the same time, but it's definitely complicated, with a lot of trade-offs.
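
A minimal sketch of the recursive loop Vitalik describes, in Python (the size and fee-rate constants are made-up round numbers, and real wallets use far cleverer coin selection):

```python
FEE_RATE = 10                              # sat/vbyte, assumed
OVERHEAD, IN_SIZE, OUT_SIZE = 11, 68, 31   # rough vbyte sizes, assumed

def select_coins(utxos: list[int], amount: int) -> tuple[list[int], int]:
    """Pick inputs to cover amount + fee.  Every input we add grows the
    transaction, which grows the fee, which may demand yet another input."""
    chosen: list[int] = []
    while True:
        n_outputs = 2        # the payment itself plus a change output
        size = OVERHEAD + IN_SIZE * len(chosen) + OUT_SIZE * n_outputs
        fee = FEE_RATE * size
        if sum(chosen) >= amount + fee:
            return chosen, fee
        if not utxos:
            raise ValueError("insufficient funds once fees are counted")
        chosen.append(utxos.pop())   # add an input and re-check the fee

inputs, fee = select_coins([50_000, 20_000, 5_000], amount=60_000)
print(inputs, fee)   # three inputs needed; the fee grew with each one
```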

Patrick McCorry: Go ahead, Tadge.  Oh, sorry, do you want to come in, Andrew?

Andrew Poelstra: Yeah, I can counter with my own user-experience story about the account model.  So, Vitalik is correct; as a wallet developer, users don't need to see this, but a wallet developer needs to think about this.  There are difficult computational problems related to doing this optimally, and it gets complicated and it's difficult to manage; and, I can comment a bit later on some benefits of the technical design.  But from a user point of view, it can be frustrating to manage UTXOs, especially when you find that some of them are very small and their value is no longer sufficient to pay for their own spending.

But, the flipside of this is that the UTXO model lets you, as a user, when you are receiving funds, generate a unique invoice number, or address, I guess, is what we call it for Satoshi reasons; a unique address for every single payment.  So, when you receive coins, you have an easy way to identify when you receive money related to a specific payment, a specific invoice or shipment, a user, or what have you.

In the account model, or in the way that Ethereum has implemented it, if you as a business try to have only one account that is receiving coins from all these different users, it becomes difficult to identify which specific payments come from which users.  You can look at the transaction and look at the spending account and try to identify users that way, but that makes it hard for users to use multiple wallets and makes it hard for users to spend coins from different smart contracts.  In general, it restricts the way that users are able to use the system.

And then, if you instead try to receive coins with an identifier attached, using a smart contract or something like that, then you find yourself having to write Solidity, or having to write some sort of contract, just to receive coins; and I have talked to a few wallet developers who get very angsty about that; they get very scared and they wish that they could do something simple.

Another approach you might try is to say, well I am going to have a unique account for every single user; I'm going to receive coins to that account; and then I'm going to have my own code, which forwards money from the temporary account to my real account; and I'm going to have all my money in the same account.  And, the reason that you need to do this sort of consolidation stuff is that when you're spending coins in an account model, you can't spend from multiple accounts at once.  If you were doing that, then you'd basically be in the UTXO model, right?  

So, because you're always spending from a single account, you need all your funds to come in to a single account, and then that makes it more difficult to distinguish between different payments; they might be coming in in different orders from different peers across the network, and so forth.  And as a wallet developer, I'm kind of curious as to the standard solution to this in the Ethereum world; and maybe I just don't understand it.
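
A toy sketch of the per-invoice addressing Andrew describes (the wallet object and its derive_address method are hypothetical; real wallets derive a fresh address per invoice from a BIP32 extended public key):

```python
class InvoiceBook:
    """Map each freshly derived receiving address to one invoice."""

    def __init__(self, wallet):
        self.wallet = wallet          # hypothetical wallet with an HD keychain
        self.next_index = 0
        self.by_address: dict[str, str] = {}

    def new_invoice(self, invoice_id: str) -> str:
        addr = self.wallet.derive_address(self.next_index)  # assumed API
        self.next_index += 1
        self.by_address[addr] = invoice_id
        return addr                   # hand this address to exactly one payer

    def on_payment(self, addr: str, sats: int) -> None:
        # Because no address is ever reused, the address alone tells us
        # which invoice (and therefore which customer) was just paid.
        print(f"invoice {self.by_address[addr]} received {sats} sats")
```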

Tadge Dryja: I can comment a little bit on this problem as well, actually.  So, the point there is, how do you identify the person sending you money; that seems to be the basic problem here.  And, what you do in Bitcoin is that you give the person paying you money a new Bitcoin address; and then when you get money to that address, you can say, "Well, Bob paid me money".

In the Ethereum world, people always tend to reuse the same address.  But now, if I've got ten payments to the same address, I'm like, "Well, did Bob pay me?  I don't know".  And, I don't think there is a payment protocol in Ethereum, because someone pinged me about this issue; but what you could do is have a payment protocol where maybe the sender pings the receiver to say, "Here's a transaction I paid you and I can prove that I own this account".  But, that doesn't really seem to exist.  And, the transaction doesn't have a memo field; it would be really nice with a memo field, because then you could just put information in the memo alongside the transaction.

Vitalik Buterin: Well, it does, right?  Ethereum transactions do have the Tx data field.

Tadge Dryja: Do they?  Oh, they have a data field, I guess, but I guess you could just put random data in there?

Vitalik Buterin: You can put as much data as you want, as long as you're willing to pay the 16 gas per byte for it, which for a memo is pretty tiny.  Yeah, and I think in the Ethereum world, there are a couple of different solutions for this.  The one that I think a lot of people use, just because it kind of maps the most closely onto their existing practices in other blockchains, is just like Andrew's solution: you have a different address for each user, and then run your own code to consolidate; that's the sort of thing exchanges are just doing.  And, it might be the most convenient for them, because they want to have the same workflow across all the different chains.

The second approach would be to not have a contract on chain, but to ask senders to include transaction data, and you would then have code that goes through the transactions and checks what their data is.  And the third, and I guess it's philosophically the most correct approach, but the Ethereum ecosystem hasn't done enough to make it easy to do, is to write a contract; and, that contract can have a function and that function would take an argument, and the function could even do something like respond back by giving you some ERC-721 or some other token.  But basically, it specifies what you are paying for, and so it could be a payment and a kind of ticketing system at the same time, potentially.  So, it depends on your use case.
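
A sketch of the second approach using web3.py (v6-style API; assumes a local node with an unlocked account, and the memo string and payee address are placeholders):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed local node

memo = b"invoice-42"   # calldata costs 16 gas per non-zero byte
tx_hash = w3.eth.send_transaction({
    "from": w3.eth.accounts[0],                          # assumed unlocked
    "to": "0x000000000000000000000000000000000000dEaD",  # placeholder payee
    "value": w3.to_wei(0.1, "ether"),
    "data": memo,
})

# The receiver scans its incoming transactions and reads each one's input
# data to match the payment to invoice-42.
```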

Patrick McCorry: I think what might come out of this is, if anyone is listening who could write an EIP standard for putting payment data inside the data field, that would be nice.

Vitalik Buterin: Yeah.  This is definitely one of those things that comes out of Ethereum having less of a payment-centric culture than a lot of the payment-focussed blockchains; there are relatively fewer people that use it for plain old consumer-merchant payments.

Peter McCormack: I mean, I can have a go at building that if you want?

Tadge Dryja: Am I still working?

Patrick McCorry: Yeah; do you want to make a comment, Tadge?

Tadge Dryja: I don't know if I'm lagging or something.  I would just say, with the Coinbase thing specifically, what was interesting, and I'm pretty sure Coinbase does not do this anymore, but I know that years ago, they sort of were acting like Bitcoin was an account model, where each person had a sort of withdrawal address: whatever address you gave Coinbase that you wanted to withdraw to, they would first send it to "your" address, which Coinbase controlled the private keys for, and from there send it out to you.  So, that was a big aspect.  They had sort of weird custom software that was trying to treat the Bitcoin UTXO model more like an account model.

And, I definitely agree that the account model is much more intuitive, but I really like the UTXO model, having worked more with it, because it's much more purpose-built.  There are definitely things you can't do with it, but for just what Bitcoin tries to do, it's a really nice efficiency gain.

Andrew Poelstra: I have one more quick thing I'd like to throw in, which is that Vitalik mentioned the difficulty, annoyance or cost of having to consolidate coins in the UTXO model, where if you receive a bunch of small payments and then the ambient network fee level goes up a certain amount, then you'll find that the payments received are no longer usable to you because they're in UTXOs, whose value is less than the network fee required to spend them.

He mentioned how to avoid this; the idea is that you can consolidate multiple small UTXOs at times when the network fees are lower.  And I think as a user hearing that, that's maybe a scary thing, right; it's something you've never even heard of and then suddenly your funds become inaccessible, and there's nothing you can do about it.  And, I should maybe mention that that's not really the case from a user perspective.

Right now in Bitcoin, the minimum fee required to get into a block goes to zero, or whatever the denial-of-service lower limit is, every Sunday afternoon.  For reasons of market inefficiency, network fees tend to go up by quite a bit during bankers' hours during the week; they go down every evening, every evening in New York, it's all New York-centric for who knows what reason; and then they go down quite a bit on Saturday and even more on Sunday.

And so for network security reasons, I hope that that situation changes, because it's very bad that nobody's paying to secure the network on Sundays; obviously, attackers don't take Sundays off.  But, what it means is that right now, as a developer, you can have code that just always does consolidation on Sunday afternoons.  And in the future, when hopefully that situation goes away, the whole ecosystem will be much more mature, hopefully to a point where these kinds of conversations will be esoteric, technical things that nobody has thought about in years, because there's an off-the-shelf solution to all of them.  I just wanted to throw that in there.

Patrick McCorry: Cool.  I can sort of move on the topic a bit so we don't keep talking about pricing.  One thing I did want to bring up is basically the Bitcoin scripting language itself and what it's capable of; I think that's quite interesting.  I was going to make a joke about how transactions are signed in Bitcoin, but I won't make that one now; I'll just move on to save time.

So basically, Bitcoin script is sort of like this dark art that no one really understands; you have to be a special wizard to write Bitcoin script.  I remember trying to do it once or twice myself and I really hated the experience.  And, I mean, potentially thousands of Bitcoins were lost in the early days, pre-2014, just because of really bad scripts that people wrote.  Most of these Bitcoin scripts, and most transactions, seem to rely on three basic primitives: this coin can be spent if one or more parties sign the transaction, a singlesig or a multisig; this coin can be spent after time T; and, this coin can only be spent if a secret is revealed before time T.  Those seem to be the basic primitives that you get in Bitcoin script.
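
For the curious, the three primitives look roughly like the following script templates (simplified; angle brackets are placeholders, and the third pattern is the classic hash time-locked contract shape used by protocols like Lightning):

```python
# Simplified Bitcoin script templates for Paddy's three primitives.

# 1. One or more parties sign (here, a 2-of-2 multisig):
multisig_2_of_2 = "OP_2 <pubkey_A> <pubkey_B> OP_2 OP_CHECKMULTISIG"

# 2. Spendable only after time T:
after_time_t = "<T> OP_CHECKLOCKTIMEVERIFY OP_DROP <pubkey> OP_CHECKSIG"

# 3. Spendable by A with the secret, or by B after time T (an HTLC):
hashlock_or_timeout = """
OP_IF
    OP_SHA256 <hash_of_secret> OP_EQUALVERIFY <pubkey_A> OP_CHECKSIG
OP_ELSE
    <T> OP_CHECKLOCKTIMEVERIFY OP_DROP <pubkey_B> OP_CHECKSIG
OP_ENDIF
"""
```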

So, this is really a question for Tadge.  So ultimately, Tadge, you were involved in writing the Lightning paper, designing the Lightning Network, back in 2014, 2015 now I guess, and these were the basic three ingredients you had to build the Lightning protocol.  How did you find that?  I normally find designing stuff in Bitcoin is like punching it into submission to get it to do what I want; I wonder what your experience was like doing that?

Tadge Dryja: It was difficult; it wasn't even the script.  At least, as of 2014 you couldn't really do the Lightning script, because we didn't have OP_CHECKLOCKTIMEVERIFY or OP_CHECKSEQUENCEVERIFY.  So, the time-locks were only transaction-based initially.  There were previous constructions that were much more limited, because your channels would have a fixed duration, or you could only update a certain number of times, so part of it was looking at, given this system, how can we make a very minimal change to allow the Lightning Network channels?

The script itself, once you have that, I don't think it was that bad.  Working on Lightning, you'd probably spend a day or two tweaking the script and the opcodes; but, compared to everything else, that was not really the issue at all.  So, writing the script is hard, but so many other things are much more involved than actually getting the opcodes and being able to use them.

Patrick McCorry: That sort of leads me to a question I also want to ask Andrew.  So, obviously the Bitcoin script is a very tiny, tiny part of the overall protocol, but whatever you can encode on the blockchain will influence the wider protocol that you build on top.  So, what do you think of the experience?  Maybe Andrew wants to answer this first, because I know he has some opinions on this based on what he's been building recently, building protocols on top of Bitcoin script; so, you've got the script and it works, but now you need to do the multi-party work on top of it; how have you guys found that?

Andrew Poelstra: It's funny that Tadge says, "Oh, well the script is just such a small part; the protocol I was designing was much harder".  So, sure, if you're somebody building the Lightning Network, then maybe scripts are a small part of your complexity; but, I would hope that the ordinary user just trying to manage their coins wouldn't be dealing with anything like Lightning Network-scale complexity.

In that case, I think for ordinary wallet developers, script is actually very difficult to use.  And, it's funny to hear you characterise what script can do in terms of signatures and time-locks and hash preimages, because that way of thinking about script is actually fairly recent.  That comes out of a project that I was doing with Pieter Wuille and Sanket Kanjalkar starting in 2018 called "Miniscript", which was a way to find a subset of script that was useful for doing a bunch of different things, but also had a comprehensible user model; because, in my experience, directly trying to use script, it was pretty similar to Paddy's.

I've probably dug into the Bitcoin script interpreter more than most people in the world and even having studied it for several years, there are really a lot of weird sharp edges.  It seems to me like it didn't have a clear purpose when it was designed.  It wasn't like, "Oh, we should think in terms of signatures, hash preimages and so forth".  It is sort of a collection of different operators that seem to be stolen from Forth, which is some old-school, 1970s programming language that is basically like Assembly, but crumpled up in a dumb way; you can quote me on that!  It had a bunch of weird opcodes that were copied from an unrelated programming language called Forth; it had a bunch of arithmetic things in it that, as I think Paddy mentioned, were eventually disabled because the implementation had security issues.  A lot of them would run into undefined behaviour in C, they would allocate unboundedly much memory and in certain circumstances, there were various awful things they would all do.

The language paradigm is basically similar to the EVM; it's a bunch of opcodes all in a row.  You do this, you do that, you do this, you do that, and you're operating on bits and bytes.  You're not thinking in terms of signatures, you're not thinking in terms of time and, as Tadge mentioned, in the original incarnation until very late in the game, until 2014/2015, we didn't even have opcodes for checking times.  You were just thinking in terms of bits and bytes which, as a user, is really not how you are thinking about things.

And so you find, as a person trying to use script, you run into this double whammy: first of all, script itself is full of dark corners and weird sharp edges; and secondly, you have to translate your mental model of what you're trying to implement.  Maybe you want a two-of-three multisignature with a time-lock emergency clause, or something like this, and you have to translate that mental model into script itself.

And, the way Ethereum approaches that second issue of translating a mental model is, in Ethereum, you have these languages; you have Solidity and you have Serpent, and maybe there are others now, but I think Solidity is sort of what everybody uses.  And this is a language that looks similar to other, more modern programming languages, and there are compilers that will translate that into the EVM, which is similar to Bitcoin script, but has maybe clearer use cases in mind.

So, yes, script is very rough.  The origin of Miniscript, which I mentioned and I will try to be brief so we can move on, was trying to come up with something that would let users use Bitcoin script, in the same way users can go ahead and use Solidity.  And, we had two limiting factors in designing this.  One was that Bitcoin script simply can't do a lot of the stuff that the EVM can do, so that was both tying our hands and also making our job a little bit easier.  And, the second thing is, we were a bit scared of all of the issues in Solidity, both related to users working with a fully general programming language, where maybe they would run into surprising interactions, especially related to reentrancy and figuring out costs for various things; and also, there have been issues in Solidity relating to the compiler itself doing surprising things, or buggy things.

And, what we wanted was a way where users were almost basically just using Bitcoin script itself, so that the layer between what appears on the blockchain and the user-readable version would be as thin as it possibly could be.  So, we came up with this scheme called Miniscript.  Miniscript is basically, you build this tree, this graph, however you want to call it, of signature checks, of time-locks, of hash checks.  You put these into a tree where all your nodes are ands and ors, and five-of-seven of these, and two-of-three of these, and so forth.  And then, we have a way to serialise that tree directly into Bitcoin script opcodes, and you can also de-serialise it back.

So, the way that we think about script today is usually in these terms, because when you're using Miniscript, you have a nice, pretty picture; you've got this tree, you can draw all your script and all your conditions.  And, if you're using script, hopefully you can de-serialise the script into Miniscript, because otherwise you're in the dark forest of mysterious script behaviour.  And, that situation hasn't really improved, ever.  Bitcoin script itself is still quite difficult to work with directly and quite difficult to reason about directly.

Tadge Dryja: OP_CODESEPARATOR; that's all I have to say.

Patrick McCorry: Can I just get a summary of that?  So, Miniscript is the idea that instead of actually writing Bitcoin opcodes and dealing with the script itself, the idea is more that you outline the spending conditions that you care about; like maybe you want a multisig here, where you want Alice and Bob to sign it; or, maybe there's another condition where you say Alice can spend this coin after time T.  And, as a programmer, you write these constraints, ie you don't really deal with the opcodes, and then everything just gets nicely packed up into a tree.  Then you reveal one of the conditions and then spend the coin based on that condition.  Is that a good way to summarise it?

Andrew Poelstra: I think so, yeah.  And what's cool is, I'm pretty sure you can do the same thing elsewhere; I think you can take Miniscript and then serialise it to the EVM as well.  You have to come up with a serialisation, but I don't think that would be difficult to do; I think you could get an intern to spend a couple of afternoons on that.  So, its model is not just limited to Bitcoin script, but we wound up coming to it both based on the limits of Bitcoin script, and also the Bitcoin ethos of really trying to minimise the number of layers between what's on the blockchain and what's in a user's head.

Patrick McCorry: Vitalik, did you want to comment on this?  I've seen you "unmute yourself" a few times.

Vitalik Buterin: Yeah, there's definitely a lot of things that got covered there and I think in general, making a scripting language is hard, so I definitely see why a lot of the opcodes got disabled very early on.  One of the classes of bugs that takes quite a bit of thinking to weed out in a scripting language is what I call "quadratic execution bugs".  This is basically where, if you have N worth of space in your script or in your execution, then you can create N things where each of those N things takes N amount of work to process.  And so it's N multiplied by N: if it's twice as big, it takes four times longer to execute.

Maybe Tadge or Andrew could correct me on this, but I remember Bitcoin having, and perhaps still having, a quadratic execution issue where basically, you generate a big transaction that has a whole bunch of different signatures and for each signature, you have to compute a separate hash by basically taking every input except for that input.  And so, you have these N hashes of N size, so it's technically N squared.  But, because Bitcoin has smaller block sizes, it only takes something like 30 seconds to verify, I would have said.

Patrick McCorry: You just took my joke, Vitalik, the one I thought I'd skipped over!

Vitalik Buterin: But, Ethereum basically historically has had lots of similar issues, and there was this time in 2016 when we had this attacker; probably one that was quite helpful to the ecosystem.  They just systematically went through every one of the issues in the protocol and attacked them on mainnet, and ended up almost shutting mainnet down.  I mean, not quite shutting mainnet down; I think the Augur ICO happened during that time period; but making it really hard to use for those 35 days.  And, we eventually had to hard fork to get around it.

So, it was tough and there are definitely a lot of these security nuances, and you have to really explicitly think about gas costs; and Bitcoin now has its equivalent of gas costs, because it's not like there's just the block size and witness size limits; I think you have sigops limitations and some other things.  So, there are tough things that you have to think about, though the trade-off is that if you create a scripting language that's richer, there are various higher-layer simplicities that you can get.

So, one of the benefits, I think, that you get on the Ethereum side is what we call "rich statefulness", this ability to have these objects where objects have persistent addresses and those objects can be modified and keep the same address.  That's something that I've definitely heard even the state channel developers appreciating; and it matters where you want to do something more complex, like one of these wallets where you can start a transaction and that transaction has a 24-hour waiting period, and within the 24-hour waiting period you can cancel it, and all of that.

There are ways to do it if you add other opcodes in a model where basically, every object that you have is a one-time-use object, but there are benefits where the mental model that you have of your wallet maps directly to a smart contract.

Patrick McCorry: You've sort of picked up my next point as well, Vitalik, so let me just summarise all of this before we jump into anything.  So, Bitcoin has this really weird signature bug, which was going to be one of my earlier jokes, where based on the number of coins you're spending, you have to do a quadratic amount of work to verify that transaction.  Now, back in 2015, Bitcoin actually had a spam attack where someone was basically creating transactions that were fanning out and creating thousands of coins that were all dust.  And eventually, the Bitcoin miners went and fixed this; they reduced the UTXO set bloat by creating mega-transactions to spend those coins.  But, those mega-transactions took ten minutes or more to verify, I don't actually remember the exact amount of time, just because that one-megabyte transaction was huge; and because of this quadratic bug, it just took forever to verify.

One of the big reasons for SegWit was not really the block size increase; it was to fix that bug and to fix transaction malleability.  So now, with SegWit, we no longer have that weird quadratic bug anymore, which is really nice.

Vitalik Buterin: But, you have a way to make big transactions without the bug, but the old type of transaction can still be made, right?

Patrick McCorry: Oh, yeah, they're still there.  The bug still exists; it's not gone, it's just papered over.  Okay, this gets on to my next point though, which is to do with Bitcoin Vaults.  But before I get to Bitcoin Vaults, there's this ongoing joke in Bitcoin that you can do everything with multisig; you know, multisig is the ultimate smart contract; you don't need smart contracts at all.  How true is that, Tadge, Andrew; do one of you guys want to pick that up?

Tadge Dryja: I guess I don't know to what extent people are aware of it.  Discrete log contracts tie into some of the stuff Andrew's worked on with scriptless scripts, so he could talk about that.  It's not exactly just multisig, but you can do quite a lot with just signatures and time-locks.  So in discrete log contracts, you can have certain types of futures contracts or forward contracts with an oracle that gives you a price feed and then, based on the oracle's price feed, someone wins money, someone loses money, in sort of a Lightning channel-like structure.

There is quite a bit you can do with Bitcoin scripting.  Obviously, it's much more challenging than Ethereum scripting, which is sort of built to do whatever you want, but it's kind of cool to be able to do these things, and it's potentially much less traceable.  If everything looks like a signature, and there's a lot you can get to fit into a signature, it's really nice.

Patrick McCorry: Andrew, did you want to comment on it?

Andrew Poelstra: Yeah, sure.  Going with the model we have of scripts with signatures and hash-locks and time-locks; Tadge mentioned some things that don't actually fit those models that you can do with discrete log contracts, where the result is just a single signature.  I have a toy, well not a toy but hopefully a real system, called "scriptless scripts", which lets you do hash preimages using only signatures; and then multisignatures and stuff, we've known for quite a while, at least in theory, how to do multisignatures and threshold signatures.  Although certainly in practice, things get quite difficult when you try to actually implement that in an adversarial setting.

The one big piece missing is that there's not any way I would say is reasonable to do something resembling time-locks using only signatures.  So, when you're trying to do all these cool things with just signatures, you find you usually need a time-lock backup condition.  You want it so that if too many of your parties just drop out and disappear, or they try to cheat the protocol or whatever, then after a certain amount of time, everyone else can take all their coins back.  You need that in pretty much any protocol.

And, right now, the way that we know how to use signatures is basically, you can't do that with just signatures; you need some sort of scripting ability.  So, in SegWit v1, or Taproot, part of our design goal was that in the cooperative case, in the happy case for any protocol you design, you can get away with using just signatures.  So, we made sure that the happy case for a Taproot spend looks like just a public key and a signature, and then Taproot lets you commit to a script, if you need a script.  And, the reason you would need a script in practice was typically to have a time-locked backup.  So, if things go wrong, then you say, "Haha, that public key that I told you about; that was secretly a commitment to some other script", and then here's the script and here's how we're going to execute it, with the time-locks and whatever else that aren't pure signatures.

But, going back to very early in this call when we talked about "worse is better", I think the scriptless script stuff is maybe a real example of where worse is better and might wind up killing us.  And I hate to say that, but it's very difficult to work with scriptless scripts.  I know there are a few people out there, including Tadge I think, but also Pedro Moreno-Sanchez - who you should get on the show, Peter; that would be cool to get Pedro on here - who are working on trying to build a coherent protocol framework using scriptless scripts.

But, the current situation is basically, when I was talking about how awful Bitcoin script is to use; well, imagine if for every single script you wrote, you actually had to write a publishable, academic cryptography paper, with mathematical proofs, and get that through peer review.  That's so difficult.  And these aren't even simple papers; you can't just ask a grad student to do one in a couple of weeks; half of these are really complicated, like multi-party protocols with really complicated security models that are difficult to even describe.

So, you can do anything you want with just signatures, but the complexity blow-up is unfortunately really big, and that trade-off may be reasonable for something like the Lightning Network, where the script component is a small piece of the whole puzzle and the script component never changes, so you'd have one script that you'd use for every single channel.

Patrick McCorry: So, I can try to summarise what you're basically saying as well.  So, the point of a scriptless script is that, so far, it implements the condition: coins can only be spent if a secret is revealed, the secret being a discrete log in this case; and, the idea there is that we want this condition to be indistinguishable from a normal signature.  You'd see a signature on the blockchain and have no idea there was a script even involved.  Taproot's also basically trying to achieve the same thing.  The idea in Taproot is that you can hide most of the scripts from the blockchain, and you only reveal the script that you're going to spend later on.  Is that a good way to summarise it?

Andrew Poelstra: Yeah, I think so.  And it ties in, I guess, to what I said very early in the call, about trying to minimise what you put on the chain.  The end result is there's a key and there's a signature, and that's all that hits the chain; the whole protocol, all the scripts and stuff, is kind of hidden behind the scenes, and it's this more complicated protocol that's only amongst the actual counterparties and not on the blockchain.

Tadge Dryja: I would also say that it's sort of hard mode to develop the scripts and signatures this way; but if you get it to work, it's usable in other systems that have much more powerful scripting.  So, you could do these kinds of things in Ethereum, because Ethereum certainly supports checking signatures.  And, you also maybe get some of the benefits of it not being obvious what people are doing.  So, it's really nice that it works either way, although for many things it gets too complicated to make.  You're not going to be able to have ERC-20 tokens with scriptless scripts.

Patrick McCorry: Yeah, okay.  I just want to move on to a bit of Ethereum now, so we can speed up a little bit.  So, one thing I wanted to highlight in the Bitcoin world is one really cool feature that would be nice: a Bitcoin Vault.  The idea being that, when I receive a coin on Bitcoin, it gets locked into my vault automatically.  And then, if someone tries to spend or steal my Bitcoin, there's this pending window of 24 hours where I could see someone trying to steal my Bitcoin and I could reverse the transaction.  That's the basic idea of a Bitcoin Vault.

But, so far today, to achieve that in Bitcoin is very difficult, because Bitcoin doesn't support this technique called covenants.  So, the way to achieve it in Bitcoin today is: you send the coin to my address; my signing key is online; I pre-sign the recovery transaction; I keep that in local storage; and then I delete my signing key.  And, that's not really ideal for a secure setup if you want Bitcoin Vaults, because all your signing keys are online for a limited period of time.

Now, in the Ethereum world, this is way easier to implement.  I could probably write a smart contract within the next hour or two that would hopefully not be broken and would be able to do that.  But, "hopefully will not be broken" is the important bit, because Ethereum is very expressive and has a lot of very quirky bugs.  A good example is the DAO that led to that hard fork where, Vitalik, was it 20% of coins that were locked up in the DAO?

Vitalik Buterin: It was 12%, I think.

Patrick McCorry: Okay, 12%.  So, you know, that was written by the Ethereum developers back in 2016 and there was an unforeseen bug that no one except Andrew Miller was able to predict.  He has an "I told you so" slide in one of his decks about this.  And then, later on, we had these wallet contracts by Parity, and the Parity Wallet was hacked twice.  The first time, I actually forget the details of the hack, but that was a smart contract bug.  The second bug for the Parity Wallet was in the wallet architecture, and it just wasn't deployed correctly.  So, this guy called "devops199" came along; he set himself as the owner of the library contract; he called self-destruct; and he destroyed it and locked up everyone's Ether.

So, getting these smart contracts right is actually a tricky job in its own right, because it's so expressive.  And, one thing I wanted to bring up is actually, I guess, security audits, because they're very topical at the moment.  I wanted to get a contract audited recently and it was going to cost me $17,000 for a week, and I can't afford that, so I didn't get it audited.  But, what we're starting to see now is a lot of these yield farming projects popping up; there's a new one called "Burger" today, there was one called "Sushi" last week; and they sort of test in production, where they release a contract, there are always bugs in these contracts, but then the community goes and audits it afterwards.

Vitalik, what's your take on that current trend?  And, you've also done some work with Vyper, a language that's trying to get rid of some of the common bugs; so, what do you think about both of those topics?

Vitalik Buterin: I think writing smart contracts safely is definitely a challenge, and it's definitely not as easy as just writing a programme and you're done, especially if you want it to end up holding significant amounts of money.  I definitely also think that the security situation has improved quite considerably since 2016 and 2017.  If you compare MakerDAO, and the security process that they went through, to the original DAO, there was much more work that went into auditing it, and the thing holds millions of ETH and so far, it's been fine; surprisingly fine, actually.  And Uniswap isn't that complicated; it's definitely much less than MakerDAO, it's just a single contract, a few hundred lines of code, and it's been fine.  And, as far as wallets go, the Gnosis Safe wallet is the standard that everyone uses, and that's, I think, gone through formal verification.

So, it does seem like the contract-writing environment is considerably safer than it was a few years ago.  But at the same time, there's definitely this situation where the capabilities of the system kind of crash against the realities of human impatience, and there are a lot of people who, if you say, "Instead of safely creating a contract system in a year, you can safely create it in a month", they're going to hear, "Instead of unsafely creating one in a month, I can unsafely create one in three days", and that's what they do.

So, I definitely expect that the yield farming ecosystem is going to get a reckoning of some kind at some point; I fully expect there to be one of these bugs of some kind that leads to money either getting stuck or getting stolen in significant quantities at some point.  And, I'm definitely hoping that when that happens, it will happen in a way that's going to scare people without actually being harmful to a really large extent.  Yeah, I mean it's a challenge.  The things that people are doing right now with all of these kinds of yield and lock-up contracts, there are definitely things about them that scare me.  The thing that scares me most is just actively encouraging people to deposit high amounts of value into untested things; yield farming is almost a perfect storm of risk in that sense.

Peter McCormack: Vitalik, can I ask you something here?  The world of exit scams is something people don't particularly like, but could somebody build one of these yield farming projects and write their own exit scam into the smart contract, without anybody knowing, and be able to steal funds from it?

Vitalik Buterin: Right, so what would happen is, first of all, when you write a contract, there is, I think, a community expectation that you have to publish the source code on Etherscan, and if you don't, then a lot of people will yell about that.  So then, you have the Solidity on Etherscan and you're going to see a bunch of people checking in and trying to audit it.  Ethereum definitely has this community of kind of volunteers; well, not quite volunteers, because they end up getting tens of thousands of dollars, because the community values the work so much; but it's kind of auditors, and there are definitely people who are going to go through and try to check all of these systems and basically just give them quick, emergency audits while they are live.

Peter McCormack: Who audits the auditors?

Vitalik Buterin: I guess presumably, the auditors implicitly audit each other and if someone misses something, then their word isn't taken for as much in the future.

Peter McCormack: And, can you have multiple people audit the same script?

Vitalik Buterin: Oh, totally; that's what usually happens.

Peter McCormack: Okay.  Next up, I talk to Andrew, Tadge, Vitalik and Patrick more about the technical differences between Bitcoin and Ethereum, but before that I have a message from my amazing sponsors.

Patrick McCorry: One thing I also wanted to bring up was to do with one of the big features of Bitcoin: multisig.  Everyone loves multisig; multisig is the saving grace of Bitcoin.  In Ethereum, multisig isn't as popular, from what I'm aware, and that is mostly because you need a smart contract wallet to implement multisig, and there are different ways to do that; you don't know if it's safe, etc, to have your own multisig setup.  Do you feel like, because of what happened with the Parity Wallet, the fact that it got hacked twice and lost so much money, this has impacted people's willingness to use wallet contracts?  Say, for example, would you lock up money in a wallet contract today and actually use a nice multisig setup?

Vitalik Buterin: So, the Ethereum Foundation still has the bulk of its coins in the same multisig wallet that it has had for five years, and it definitely has not made the choice to move to a singlesig.  So, I definitely think that it would be wrong to take the Parity Wallet experience and transplant that to multisig wallets in general.  As I mentioned, the wallet everyone uses has had a lot of verification and attention on it.  But, it definitely is true that the Parity situation did spook people off of multisig for some time.  At the same time, the smart contract security issues are not the only reason why multisig is having a hard time getting adopted on Ethereum as an option.  There are a lot of these subtler issues, and this is maybe one of the Ethereum equivalents to the stupid stuff in Bitcoin, like the zero at the start of a multisig that you mentioned.

Basically, the problem that Ethereum has is that it tries hard to be this general purpose and abstract thing, but there's one specific part of the system that isn't abstract at all, which is basically what kind of account can be a top-level account; so, what kind of account can initiate a transaction and directly pay for transaction fees.  There's only one type of thing that's allowed, which is just single-signature ECDSA accounts; what we call EOAs, externally owned accounts.  We have some EIPs, and I've been heavily involved in pushing them; "account abstraction" is the name for this; that allow you to have smart contracts at the top level, with smart contracts paying for transactions.

But, until we have that, the problem is basically that if you have a smart contract wallet, then you actually need to have two addresses; one address is the smart contract, the other address is a single-signature address, where the single-signature address holds the Ether you use to pay for transaction fees.  So, your transaction initiates from the singlesig and then it calls into the multisig, and then the multisig verifies the other signatures, and then it forwards the call on.  And if you don't want to do that, then you could use one of these Layer 2 gas market things where someone else acts as the relayer for you, but then you have to pay for an entire extra transaction.

So, there are these annoying complexities that we definitely didn't think through well enough at the time when we were creating the protocol, and now we're finally starting to make moves to improve the situation.  But that definitely, I think, has been not completely preventing, but definitely hampering, adoption of multisigs and some of the other smart things, like social recovery, for example, that I advocate wallets actually being used for.

Patrick McCorry: I can definitely agree; the fact that the person who's authorising the command and the person who's paying for the transaction are different is the biggest pain in the ass in Ethereum that I've had in the past year.  We're building a third-party relayer called any.sender, and that's the biggest issue we run into at the moment; it's driving me crazy.  So, we have to move everyone onto these wallet contracts.

I only have one more question and then we'll move on to the future of scalability, because I think that will be quite interesting.  This is really just to do with the birth of Ethereum, now in hindsight.  So, I went on the original Ethereum.org website to see what was promised, because actually I didn't get into Ethereum until about 2016.  I was there in 2014, I just wasn't paying attention.  So, what it says on the website is, "When Ethereum was launched, it was called a decentralised and scalable world computer".  That's me reading it, sorry.

What it said on the website was, "Ethereum can be used to codify, decentralise, secure and trade just about anything: voting, domain names, financial exchanges, crowd funding, company governance, contracts and agreements of most kinds; intellectual property and even smart property thanks to hardware integration."  I took that from the Ethereum.org website from the web archives, so hopefully there's some integrity to what that statement was, and hasn't been changed in hindsight.  

Do you think, based on that and how Ethereum was advertised back in 2014, anything was misleading?  Do you think it has held to that goal?  In hindsight, would you change anything about that?

Vitalik Buterin: I think the world computer metaphor is one of those things that was badly chosen, to some extent.  The intention of the world computer metaphor was that this is a special kind of computer that can be accessed by and used by anyone in the world.  It is a special-purpose tool, and transactions on it are going to be expensive, and you can only do a little bit of computation on it.  But, people took it to mean a computer powerful enough to meet the world's needs for computing, which it obviously isn't.  Even with hyperscale rollups, it's never going to come within orders of magnitude of being able to run the entire world's AWS stuff, right?

So, as far as that list of applications, people are using it for various financial applications.  People definitely have made projects using it for smart property; people are using it for domain names.  In terms of scalability, I think we've been fairly clear since the beginning that scalability is going to depend on future things, like either sharding or Layer 2 protocols.  Those are things that we have been really actively talking about in pretty much every Ethereum discussion since even 2014.

The thing that I think you can say fairly is that we did underestimate the amount of time until those technologies would be ready.  But in terms of just fundamental feasibility, we've definitely come quite far and now, the earlier phases of ETH 2.0 are in testnets, and rollups, which would actually provide big scalability gains for a lot of things, are months away from production.  I think timing was probably the main issue, where expectations and reality ended up being relatively disjoint from each other.

Patrick McCorry: Peter, did you want to say anything before I move on?

Peter McCormack: No, we covered world computer and changing narratives.  I know some Bitcoin people, that really frustrates them.  I don't use Ethereum, but I understand that things pivot, things change in technology; that's just the way it is; that's never been something that's fussed me.

Patrick McCorry: Okay, so I guess we'll move on to scalability now and the future of both of the networks.  I have tried to categorise scalability in two ways to keep it very simple.  The first way: given the set of hardware that we have, can we increase the throughput of the network; ie if it takes me 20 hours to verify the blockchain, can we reduce that to ten hours?  That's a constant-factor improvement: can we simply improve the software to be more efficient?

The other one is, can we just reduce the computational load?  Does the network have to see every transaction; does the network need to process the entire smart contract; what does the network really have to do in order to keep everything secure?  So, I'm going to touch upon the first category briefly and then we'll dive into the second one a bit more.

Peter McCormack: Are these both points for both networks?

Patrick McCorry: Exactly.

Peter McCormack: Yeah, let's go with Bitcoin first; just have an Ethereum break!

Patrick McCorry: Awesome, yes.  I always have this joke about Bitcoin developers, hopefully no one gets offended by this, but they always remind me of assembly programmers, who want to understand to the best of their ability every little detail about how the implementation works; straight down to how many sigs you can have in a multisig; how many sigops are allowed; what's the exact microsecond to verify an ECDSA signature?  That's actually some of Andrew's work there.  So, back in 2016 he was working on libsecp256k1, which is an implementation of signature verification that was way faster than what Satoshi was using before.  Andrew, do you want to talk about some of that work; why is it important to speed up verification of signatures?

Andrew Poelstra: Yeah, so what Satoshi was using was actually OpenSSL.  He was using an off-the-shelf, fairly general-purpose crypto library; well, not fully general-purpose, but a signature library designed to work with a variety of elliptic curves, using various slightly tweaked algorithms; it wasn't really super optimised.  So, Pieter Wuille actually, I think in 2014; no, earlier than that it must have been; 2012?  I don't know, a long time ago; he developed the libsecp256k1 library that you mentioned, which I did a fair bit of work on in 2016.  I'm still a maintainer, but arguably I shouldn't be, because I'm not so active on it these days.  It re-implements the lower-level signature operations, signing and verification, that are used for Bitcoin.  I know it actually is also used for Ethereum; I think all of the major Ethereum clients eventually used libsecp for doing signature verification.

This is a very widespread library.  Almost all Bitcoin software and almost all Ethereum software wind up using this, because it is really hyper-optimised.  So, your joke about Bitcoin developers being obsessive over the nitty-gritty details and stuff, it is kind of a fair joke.  As you mentioned, there's this whole other category of optimisation; there are people in the Bitcoin world working on Lightning and working on other scalability things.  But certainly the people doing major protocol changes and who are working on Bitcoin Core, especially the consensus part and the crypto part of Bitcoin Core, they really obsess over the smallest of details.

Actually, a bunch of them over the years have worked for me on my team.  In a for-profit company setting, this can be extremely frustrating sometimes.  They will be really getting into the weeds on specific stuff and I'm like, "Guys, we need to fucking deliver it.  Please, let's just move on", and then it will go back and forth on some specific detail for weeks on end.  So, your joke made me smile, certainly; I think it's fair.

Patrick McCorry: Tadge, I've got a question for you as well.  This is related to, I can't actually pronounce the protocol; Utreexo?  It's basically an accumulator, so the idea is that I can validate transactions without keeping around the entire UTXO set; you don't need to keep that around anymore.  Do you want to talk a bit more about that?

Tadge Dryja: Sure.  I guess there's very similar research done on the Ethereum side, so maybe we'll be talking about that, but I guess in Ethereum they call it stateless clients.  That name is like, "Well, there's still state, it's just smaller", but anyway, it's a similar idea.

So, the idea of Utreexo is, instead of keeping the whole state, you just keep this root hash of it, which Ethereum has had: Ethereum has this state tree and every block commits to the root hash.  And people have been talking about that kind of idea in Bitcoin as well for years and years and years; like, "What if we have UTXO commitments; what if you have a hash of the UTXO set and stick that in the block header or something?"

The nice part about Utreexo is it uses a hash-based accumulator with some really nice scalability properties, so that the proofs are not too big, and the proofs stack alongside each other, and there's a bunch of optimisations so that we can exploit a lot of the spending patterns of Bitcoin.  And so there's a fun chart I like in the paper which shows the popular lifetimes of a Bitcoin UTXO.  A Bitcoin UTXO is created as an output and then at some point it gets destroyed as an input; so, how long does it live?

The most popular lifetime for a UTXO is zero.  The most common case is that a UTXO is created and then destroyed later in the same block.  So, it's like, "You don't even need proofs for that"; a full node doesn't even have to touch the database for that, and it's sort of this power law going down.  So we can exploit that fact and you can get pretty small proofs, despite it having, in theory, linear proof size, which is different from a lot of the more mathematically complex accumulators.  It's kind of a simpler version that maybe initially people would say, "Oh, it's too simple, it won't work".  But if you actually look at it and all the optimisations, it ends up being about a 30% data download overhead to do these really cool proofs; and it actually gets really fast, because you don't have to deal with LevelDB, which in both Ethereum's case and Bitcoin's case is in many ways the big bottleneck.

So, it's a nice way to not have to worry about databases.  Those databases do grow unbounded in both Ethereum's and Bitcoin's case.  While there are block size limits and gas limits; I shouldn't speak for Ethereum, but I know in Bitcoin's case there's no real definite bound on the UTXO set size.  It could get huge; it doesn't; it's about 70 million right now; but if someone really was dedicated and wanted to attack, maybe I shouldn't say this, they could really make it a lot bigger, because it sort of treads water right now.

Vitalik Buterin: Right, and in Ethereum there's no in-protocol hard limit on the state size and right now, I forget, it's somewhere in the hundreds of millions of objects; I want to say around 400 million, but it depends on what you count and whether you're counting accounts, storage slots, hashes and so forth; but, it's in that ballpark.

Tadge Dryja: Yeah, so it is a bit bigger than Bitcoin's, but it's still the same order of magnitude; it's not exponentially bigger or anything like that.  But in both cases it is a long-term worry.  Ethereum, I guess you're thinking more of ETH 2.0, but in Bitcoin it's sort of, "Yeah, this is a long-term scalability issue".  Historic blocks you can get rid of once you have verified them, but you can't really get rid of the current state if you want to keep validating; that is what the Utreexo idea addresses: with it, you can get rid of the current state.

The only downside is you need to accept these proofs and some nodes on the network have to be bridge nodes.  So, a lot of the work was how do you make it easy to run a bridge node?  So, someone can run a bridge node on a regular old laptop, in order to let other people run nodes on much smaller computers.

Patrick McCorry: Just to bring this back for a more basic audience who aren't highly technical: the state is basically everyone's balance on the network.  So, if I run Bitcoin Core on my laptop, I don't want to keep everyone's balance on my laptop at all times, especially when someone hasn't spent their coins in six years because they're hardcore hodlers.  So what you can do is use this accumulator to store a little bit of information, and then every time someone goes to spend their coins, they provide a proof that they still own these coins and they're still valid; that's the whole point of it.

Tadge Dryja: Yeah, philosophically it's kind of nice, because right now the model is, yeah, you store everyone else's coins on your computer and it's kind of annoying.  It would be better if it were the responsibility of the people who own those coins.  You have to keep track of your private keys and in this case, maybe you also have to keep a proof that your coins exist, so that you can prove they exist to everyone else when you're spending.

That's not really part of a lot of these models, because it sort of conflicts with existing nodes.  If you were starting Bitcoin from scratch, maybe you could do it that way and it would be really scalable and really cool; but because it's so hard to change Bitcoin, you need these nodes to bridge the old and new software.  Maybe someday you could get rid of the old version, but probably not any time soon.

Patrick McCorry: I think what's really cool to highlight, and this is why I brought this up, is that while Bitcoin lacks any substantial upgrades, feature-wise; you know, it's not like the scripting language has been improved, they're not really adding new features to Bitcoin; a lot of these changes are local node changes.  You can change your local node and you don't need to tell the rest of the network you're doing this; you can just work on making the actual client way more efficient at verifying transactions.  You're not changing the protocol, you're just changing the software, and anyone can do that locally on their computer.

Now, what I find in the Ethereum world is that a lot of the focus is on changing the protocol and not necessarily the software.  I always get the impression that the software is just staying alive, keeping its head above water, and a lot of the focus, or at least the vocal part of it, is not really about changes to the software itself to make it more efficient.  Maybe, Vitalik, you can comment on that; is that a fair reflection; is that wrong?

Vitalik Buterin: Sure.  So, I think my mindset towards scaling is that there are scaling techniques that get you 3X and there are scaling techniques that get you 100X, and obviously you should spend most of your time on the 100X.  So, that is why our mental effort tends to go toward sharding and Layer 2s and so forth.  At the same time, there have been improvements to clients; you can even see this on Etherscan.  If you go to Etherscan and you look at historical uncle rates, I think it's Etherscan.io/chart/uncles, then you can see how in 2017, when we started to see substantial usage, the uncle rates went really high.  And then sometime in 2018 to 2019, the uncle rates just shot right back down to about 7%, almost the same as what we had when we had empty blocks.  But block sizes didn't fall.

What happened there was actually client improvement; it was improvement to propagation.  One of the clients, I think, actually implemented a thing where you propagate a block after you verify the proof of work, but before you verify everything else.  There have been improvements in block processing time; there have been improvements to databases.  A couple of months back, Geth had something that reduced the data storage in nodes by something like 30% or so.  There have been fixes to bugs, obviously.  As I mentioned, in 2016 we had this huge marathon that basically rid us of all of the quadratic execution attacks in Ethereum.

So, both things happen, and it could also just be an artefact of the fact that, for whatever contingent cultural reasons, researchers are louder than developers; while both strands of hard work are happening, people hear more about one than the other; but again, we've been trying to balance that out recently.  You've been seeing on the Ethereum blog that there's work on what we call the ETH 1.x initiative.  Some of that is protocol changes, but a lot of that is also client changes.

So, as you mention, we have work happening on stateless clients, but it's not necessarily fully stateless; sometimes it's also partially stateless, where instead of nodes having the entire state, they have, say, a few gigabytes of state and rely on proofs for everything else.  So, those things are all happening and they're happening in parallel, which is definitely something that's sometimes hard to get across, right?  It's not a matter of, "Oh, it's a choice of one or the other"; you do this thing and you do the other thing.

Patrick McCorry: Awesome!  So now I'm going to move on to the other side of scalability.  Even this can be categorised into two things.  So, the question is, how can we reduce the computational load on the network?  The first approach is fancy cryptography.  One of the ironies in cryptography research is that cryptographers mostly focused on privacy for the past 20 years, and now these SNARKs have evolved out of that; Zcash uses a SNARK for privacy.  But actually, SNARKs are great as a scalability solution.  They allow you to provide a very small piece of information that's quick to verify, but where loads of computation was done in the background.

The other approach is just to take everything off-chain as much as you can.  Like a Lightning channel: Alice and Bob do the payments between themselves and just send the final settlement to Bitcoin, in this case; they don't send anything to the network unless they have to.  So, let me cover the fancy cryptography a little bit first.  I'm going to keep this quite brief, because with fancy cryptography you can go down a deep rabbit hole, and I don't think people really want to go into a deep rabbit hole about fancy cryptography.

So one question is, why is fancy cryptography like SNARKs better than just improving the software; what is the big difference there?  Does someone want to summarise that for us?

Vitalik Buterin: Yeah, I think fancy cryptography basically provides the 100X that I talked about.  The reason, I think, why this big move towards general purpose zero-knowledge proofs that we've been seeing over the last decade has been so powerful is that you can invent the primitive once, and then once the primitive is invented, people can just go and use it for whatever much more quickly, right, because it's general purpose; so it's not about creating a new protocol, it's about just applying an existing protocol with a slightly different circuit or whatever.

So, the reason why it's 100X is basically because you can generate these proofs where you perform a one-time effort to make a proof that says, "All of these statements about all of these transactions are valid", and then you publish it to the chain; but then verifying that proof is a very efficient, quick operation that takes a fairly small amount of time, almost regardless of how much information it ends up proving.

ZK-Rollups are one of these scalability strategies that rely on the fancy cryptography; the idea's basically this hybrid between Layer 2 and Layer 1 in some ways, but you don't take everything off-chain; you still have a few bytes of data, I think it's about 16 bytes of data on-chain for every transaction.  But then, instead of providing all the signatures and doing all the computation directly on-chain, you just have a proof that says, "I know all the signatures and I have personally run the computation that says, if you take this previous Merkle hash of this data and you apply these transactions, then you get this other Merkle hash.  I know this and I have proven this; here is a proof, and you can just verify this proof, which proves that I've run this computation; it verifies this entire whole bunch of stuff".

So, it's very powerful.  It also gets at one of the trade-offs that I think we see a lot of the time, where the fancier cryptography; and this also applies to BLS signatures versus Schnorr, for example; the cryptography that's fancier, relies on harder assumptions and is much more challenging at the lower levels, often ends up presenting a much simpler and easier-to-use black box at the higher levels.  So, you are trading off complexity on one side for less complexity on the other side.

So on the one hand, yes, these SNARK protocols are fairly tricky and complicated; but on the other hand, once they're there, once you turn them into libraries; you have ZoKrates and Circom and so forth; then maybe you can just go and use them.  Developers who don't even understand the nuances of SNARKs can go in and build privacy-preserving things like Tornado Cash, and scalability things like Loopring and so forth.

Patrick McCorry: Yeah, I can pick up on that.  So, there are two real points that were made there.  One was about rollup, and I want to cover rollup in about two or three questions; that's sort of like Layer 2 in a way.  The second one was the assumptions for SNARKs.  SNARKs are very complicated, they're fancy cryptography, and they add in additional assumptions that people may or may not be comfortable with.  So, this is what I want to ask Andrew now.  Andrew worked on Bulletproofs, which is another zero-knowledge proof system.  It's now implemented in Monero and is inside the Liquid network as well, and that was also really focused on privacy, so you can have privacy-preserving transactions in these different blockchains.  I just want to get your impression, Andrew.  Why are Bulletproofs not in Bitcoin yet?  We're still struggling to get Schnorr into Bitcoin; I asked some Bitcoin wizards, back in 2014, when Schnorr was going to get in, and I was told in a few months; and it's now six years later and it's still not in.

What's holding up Schnorr?  Actually, let's talk about that first.  What is holding up Schnorr, because I know you have some good answers around that?  Schnorr is a very simple signature scheme, and you guys are building really cool stuff on top that aggregates signatures into one big signature.  What's the hold-up; what are the problems you've run into with that?

Andrew Poelstra: Yeah, that is a great question; I will try to summarise it fairly quickly.  So, we were talking about Schnorr, as you say, like six years ago.  When we first encountered it, it looked like a much simpler signature scheme.  It was provably secure in a much saner model than ECDSA can be proven secure in.  It appeared to be a lot more efficient; to verify Schnorr signatures, you don't need to do an operation called a modular inversion that you do need to do to verify ECDSA signatures.

What happened actually, a lot of the initial delay, believe it or not, is that we found better ECDSA verification algorithms, and Schnorr actually seemed like less of a pressing thing for a little while.  It seemed like all these efficiency benefits that we thought we would get, we kind of weren't going to get.  There were these kind of philosophical, provable security benefits, sure; but ECDSA has been used for a while, since the 1990s, and I think in practice we're basically comfortable with ECDSA.

Then the other big benefit of Schnorr was that you could do multisignatures with it very efficiently.  By multisignatures here, I mean a specific form of multisignature where you have multiple parties who can combine to produce a single key that represents all of them, and then they can interactively produce signatures for that key, so that all of them need to participate to sign a message; but what the blockchain sees is just one key, one signature.

There's sort of an obvious way, if you naively look at the Schnorr verification algorithm, to do this: you take the elliptic curve points representing everyone's keys, you add them together in the sense of adding elliptic curve points, and then everybody does the signing protocol together.  Everybody does the first half of the protocol, they add together their components; then they do the second half, and they add together their components of that; and then you just take the sums.  Everything's just sums, so it's so simple.

We ran into a series of problems with this; I guess two categories of problems.  One was that we realised that if we used the Schnorr signature as described in the Schnorr paper, where you don't commit to the public key, there are certain weird Bitcoin things, weird cryptocurrency things, that we might want to do that would suddenly become insecure.  So, there was this worry about needing to commit to the public key, and there's this trade-off between committing to the public key and thereby deviating from the standards that were out there; and not committing to the public key and having this kind of fragility that might cause problems in larger crypto systems.

The bigger issue that we were having with Schnorr signatures, though, was that to do this multisignature scheme, we ran into a lot of trouble.  The scheme I just described, where you just add things together, is woefully insecure; it's very easy to break.  So, Pieter Wuille and Greg Maxwell and I came up with this different, fancier scheme that we thought would be secure, that would eliminate a bunch of these attacks, where basically everybody's public key would be re-randomised in a certain way.  You take a key, you hash it, and then multiply the key by the hash; that just kills any structure in the key, so you can't use it to do these structured-key attacks.

Well, it turns out that's also broken.  There's something called Wagner's algorithm out there that can be used to break it quite efficiently.  So then, we came up with another scheme.  Actually, we submitted this to, what conference was it?  Maybe Financial Crypto or something?  It got rejected by a reviewer who was like, "Hey, first of all, your proof sucks", which it did, because the scheme wasn't secure; "Secondly, why didn't you cite this 2006 paper by Mihir Bellare and Greg Neven?"

So we looked at this 2006 paper and it's something complementary to, but different from, what we were doing with multisignatures.  It gave us a way to take a whole bunch of different public keys and get a single signature where, instead of collapsing the keys into one, you still have separate keys but you get one signature.  That is something we now call signature aggregation.  So, then we went on this long, I guess over a year, detour thinking about signature aggregation versus what we now call key aggregation, trying to separate those two and figure out which of the two we actually wanted in Bitcoin; what was going to provide the most bang for the buck; what was going to be the most efficient; and then also, what was going to minimise bikeshedding; what could we come up with a coherent proposal for?

Patrick McCorry: Andrew, just before you continue, can I try to separate what that means, just in case we get into a discussion?  So, the original goal was that you have ten parties, they all want to do a signature together, you want to combine those signatures and then they authorise the transaction.  I think what you mean by signature aggregation is the second thing, the one in the story you just discovered: it takes any set of signatures and just combines them all together.  Is that correct?

Andrew Poelstra: Yeah, that's correct.  Yeah, so an important difference there is when we're talking about key aggregation we're thinking about a bunch of people who all own like a single coin and they want to have that coin be controlled by a joint key that represents all of them.  When we talk about signature aggregation, maybe there are multiple people with multiple coins all doing their own thing, but we want to be able to somehow collapse all of their signatures into one.
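
[Transcriber's note: a toy sketch of that second idea, loosely in the spirit of the Bellare-Neven construction Andrew mentions, with separate keys and separate messages collapsing into one signature; the details here are simplified and the parameters are deliberately insecure.]

```python
# Signature aggregation: n signers, n keys, n different messages, one signature.
# Same insecure toy group as the earlier notes; illustration only.
import hashlib, random

p, q, g = 2039, 1019, 4

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def aggregate_sign(signers):                    # signers: list of (x, X, m)
    nonces = [random.randrange(1, q) for _ in signers]
    R = 1
    for k in nonces: R = R * pow(g, k, p) % p
    # Each signer gets their own challenge, so the keys stay separate...
    es = [H(R, i, X, m) for i, (_, X, m) in enumerate(signers)]
    # ...but the s-values still collapse into a single number.
    s = sum(k + e * x for k, e, (x, _, _) in zip(nonces, es, signers)) % q
    return R, s

def aggregate_verify(pubs_msgs, R, s):          # pubs_msgs: list of (X, m)
    rhs = R
    for i, (X, m) in enumerate(pubs_msgs):
        rhs = rhs * pow(X, H(R, i, X, m), p) % p
    return pow(g, s, p) == rhs

signers = [(x := random.randrange(1, q), pow(g, x, p), f"tx {i}")
           for i in range(3)]
R, s = aggregate_sign(signers)
print(aggregate_verify([(X, m) for _, X, m in signers], R, s))   # True
```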

So, for a little while we were super excited about signature aggregation.  We were like, "Oh, actually this is the more efficient thing.  There are more signatures than keys on the chain".  That's a weird statement and I'm not going to justify it.  But we really wanted to collapse signatures together, and collapsing keys together was maybe something that was less of a priority, or something we could do in a different protocol, or I don't know.

Then, when we started trying to design an actual protocol for signature aggregation, we ran into a whole bunch of weird interactions with other stuff.  It would change the transaction verification model in Bitcoin Core and other Bitcoin verification software where, rather than a transaction being valid when every input is valid, suddenly there's this weird interplay between all of the different inputs.  It would compromise our ability to soft fork new features into Bitcoin, because when you are deciding how to verify an aggregate signature, as a verifier you need to see some input data from each of the unaggregated signatures; not all of it, but you need to see a little bit.

Then, most of the soft-fork mechanisms we had in mind might actually change which signatures were available to different verifiers.  So, suddenly things that used to be soft forks had become hard forks and people were upset about that, and it was going to force us to change our thinking about what different fork mechanisms might look like.  There were bad interactions with blind signatures; there were bad interactions with some other weird signature protocols; and it really just kind of blew up in our faces complexity-wise.  So, we set that aside and went back to looking at just key aggregation.

So, we came up with this new scheme called MuSig, which basically provides improved key aggregation, and also supports signature aggregation.  Let me summarise: we went back to just looking at key aggregation, saying, "Let's just try to have multiple parties who all produce an aggregate key".  I think actually the reason that we really started looking at that again was that we came up with Taproot.  I might be getting the history in the wrong order here, but this is all still a long time ago; I'm not even up to the present yet.

But, when we designed Taproot, we had this protocol, we submitted it, and I think it even passed peer review.  I think it was fairly far along in the publication process and we kind of got blindsided.  A few different authors, Ford, Neven, and two others whose names I'm blanking on, published a paper showing not only that our MuSig preprint was insecure, but that it was impossible to prove any such scheme secure using the techniques we were using.  It was kind of a ridiculous proof.  It wasn't just that our paper was wrong; it was that our paper couldn't possibly be right.

Patrick McCorry: Wow.

Andrew Poelstra: Yeah, it was kind of funny; I was kind of blindsided.  That's not the kind of rejection you want to get, right?  You don't want to hear you're wrong, and you certainly don't want to hear, "It's impossible for you to be right; I have a proof that you're wrong".  So, then we had to modify it.  There is a fairly straightforward modification to the protocol: we added an extra round, so it became three rounds instead of two.

Then we were able to get the proof to go through, and it kind of freaked us out that we had written up this proof and none of us had noticed this mistake in it.  Not only that, but there were multiple other papers that had been published, going back like 20 years, and these other papers had the same mistake in them.  So actually, there were a bunch of different protocols that were knocked out by the same paper.

So, then we went back, and there's this whole thing of fixing the protocol and, you know, building back up our confidence that we were able to design secure multisignatures after all.  Meanwhile, we were getting flak from folks on Twitter and from people just coming and saying, "Schnorr invented this scheme in 1990, but you've been talking about putting it into Bitcoin since 2013".

Patrick McCorry: That sounds like me!

Andrew Poelstra: Right, yeah, I mean I was like, "Guys, come on, this is a joke, okay.  It's so simple; you can fit a Schnorr signature in one equation; you can just say it; it's s = k + xe; that's it.  How hard is that?  It's one equation".  So, we had that difficulty, and then when we started actually trying to implement the multisignature stuff, we ran into other issues related to communication between the different signers.
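
[Transcriber's note: for completeness, here is the whole scheme Andrew is quoting, in standard notation; the symbols G (generator), x (secret key), P (public key), k (nonce), H (hash) and n (group order) are the conventional ones and are not spelled out in the audio.]

```latex
\[
  P = xG, \qquad R = kG, \qquad e = H(R \,\|\, P \,\|\, m), \qquad s = k + xe \pmod{n},
\]
\[
  \text{verify: } \; sG \stackrel{?}{=} R + eP .
\]
```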

So, MuSig itself actually isn't too bad in this respect, but when you try to do threshold signatures you start getting into really complicated things where you need private communication channels between parties, you need reliable broadcast channels between parties, you have to think about authentication, you have to figure out how you are talking to different people, how you are ensuring that different people are receiving the same data, etc; there are protocol difficulties with this stuff.

Meanwhile on the Taproot front, because we landed on Taproot as the way to get Schnorr signatures into Bitcoin, it's really quite a compact, simple protocol that doesn't have a lot of surface area for bikeshedding.  While implementing it, we found, and here's classic Bitcoin hyper-optimisation, that in some cases fairly normal wallet transactions might be one byte larger in Taproot than they would otherwise be; one byte; because our keys are 33 bytes in secp256k1, and otherwise we might be able to use a 32-byte hash or something.  I don't remember exactly under what circumstance you wind up comparing a hash to a key, but that's the deal.

And so, we said, "Okay, let's compress the keys to 32 bytes".  Then we spent forever doing that, and there are multiple ways that you could do that and eventually we settled on one.  I don't know, it's just the whole thing.

Patrick McCorry: Maybe I can summarise this?

Andrew Poelstra: Okay. 

Patrick McCorry: Because I've got to say, listening to this story, it's really hard!  Schnorr looks easy, but even that's hard.  I guess Vitalik's original point was that Schnorr rests on a very basic assumption; it's just discrete log.  One question I wanted to ask was, if you use something like BLS with pairings, you sort of get a lot of this just out of the box.  So, what's the motivation to go down the Schnorr route and really pursue these new protocols, versus adding in a pairing assumption and just taking BLS out of the box, which is what Ethereum did do?

Peter McCormack: Paddy, you can't keep calling it "Snore"!

Patrick McCorry: "Snore"? I can't pronounce it!

Peter McCormack: It's Schnorr.  You call it Snore; come on, man!

Andrew Poelstra: So, one reason is that I think historically, we did actually expect Schnorr would be significantly simpler.  Now we have seen all of these issues and you're right; almost all of these issues, you run into them and you're like, "BLS would have solved this", because BLS doesn't involve any randomness.  The keys are automatically aggregatable, the signatures are automatically aggregatable; there is a lot of nice simplicity that you get.
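
[Transcriber's note: the "automatic" aggregation Andrew concedes here can be written in two lines; this is the textbook BLS shape, with e(·,·) the pairing and H a hash into the curve, and the notation is mine rather than the speakers'.]

```latex
\[
  X = xG_2, \qquad \sigma = x\,H(m), \qquad
  \text{verify: } \; e(\sigma, G_2) \stackrel{?}{=} e(H(m), X),
\]
\[
  \sigma_{\mathrm{agg}} = \sum_i \sigma_i, \qquad
  e(\sigma_{\mathrm{agg}}, G_2) \stackrel{?}{=} \prod_i e(H(m_i), X_i).
\]
```

[No nonces appear anywhere, which is why keys and signatures aggregate without the interaction that plagued the Schnorr designs above.]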

First off, as you say, there are a bunch of concerns about the pairing.  So, if we introduce the pairing assumption; so for the audience, a pairing assumption is a cryptographic assumption.  It's some computational problem that needs to be very difficult, otherwise the signature scheme is insecure.  This is just how you do cryptography, like every single cryptosystem: you think about some underlying hard problem that we've thrown like 10 or 20 or 50 years of research at, trying to solve it efficiently.  We can't figure out how to do it efficiently, so we just assume it's impossible and then design cryptosystems assuming that it's impossible.  So, in Bitcoin we have this very simple, standard assumption: the elliptic curve discrete log problem.  Pairings would introduce a new one.

So, there is some suspicion about pairings because pairings are relatively new.  I think the BLS paper was from 2006 or was it 1998?  I'm blanking.

Tadge Dryja: Pairings definitely started to exist in the 1990s.  I think they were originally introduced in the context of attacks on elliptic curves, right?

Andrew Poelstra: Yes, that's right, yeah.  So, that was in the 1990s, and then the BLS paper was the first kind of positive use of pairings.  So, pairings are a bit younger, I guess ten years younger, than the more standard elliptic curve assumptions.  They came into being as a method of attack on existing elliptic curve schemes, which I think left a bad taste in a lot of people's mouths.  Like, this is some weird, surprising algebraic trick that breaks things; it's a breaking thing; it's a bad thing because it breaks stuff.

I really do think there's like a subconscious aversion to pairings for that reason; but also because they're younger, and pairings have not really been used in production systems the way that elliptic curve signatures have, the way that ECDSA has.  I'm not aware of any major system with any real money or any large number of users that uses BLS, that isn't part of very recent, like the last two or three years, cryptocurrency kind of stuff.  And even then, I'm not sure that anything has been really big and out there.  So, there's suspicion of these pairing assumptions because they haven't been tested in real life.

Then there's another category of suspicions related to pairing software not being as mature, and there not being production crypto libraries out there that do pairings, that have been battle-tested and so forth, that will provide constant-time signing, or that will have had the kind of QA cycles that we've had on something like libsecp, which is very widely used, or even that we've had on OpenSSL, which has certainly had its fair share of problems, but it has been out there for 30 years and a lot of people use it and some people look at it.

So, Bitcoin has a real aversion to changing its security model, ever.  This particular change is one that really, I personally wouldn't be so comfortable with.  Then there is also this kind of common knowledge issue where I think that the community wouldn't like it, so I'm not going to bother trying to propose it, or I'm not going to champion it, because I think people are going to get upset with me.  It may be that other people are getting upset because they think other people are going to get upset, and actually maybe we'd be fine with it.

Tadge Dryja: So, everyone is upset then?

Andrew Poelstra: Yeah, but actually in the case of pairings, I think there is legitimate opposition, just because it's a new and less tested thing.  Then another aspect is performance.  It is slower to verify pairings than it is to verify regular signatures, by a factor of about ten I think, although there has been some work reducing that to maybe five.

Then there was that 2016 paper with an attack on pairings that required all of the parameters to be increased.  After that, a lot of the efficiency improvements got eaten up, because you had to increase the parameters; I think pairings were briefly more competitive on performance, but then it turned out the parameters had to grow.

Patrick McCorry: So, to summarise, just so we don't go on too much about performance: the reason why BLS is not being introduced into Bitcoin, where it would give you all these lovely unicorn properties that make everyone really happy, is mostly because, one, it's slightly slower at the moment; it's a younger technology in a sense, and it is not as battle-tested as ECDSA, or I guess Schnorr in this case; and it introduces a new trust assumption that didn't exist before.  But the trade-off is, it means you need to design new protocols that work on top of Bitcoin to get similar properties, and those are quite hard to get right so far as well; so it's like pushing complexity away from the protocol and onto the client-side implementation, rather than into what the nodes do.

Vitalik Buterin: Yeah, and I think pairings are definitely kind of devilishly hard; I think pairings are the only cryptographic protocol where, even after I wrote an article explaining how pairings work, I still don't feel like I have a full intuitive grasp of how the heck this thing can possibly exist.  So, I think it's definitely fair to have that kind of aversion to it.

In terms of the efficiency of libraries, on the Ethereum 2.0 side we've been spearheading this standardisation effort, and this includes the various other implementations; we're also now talking to Filecoin, Algorand and Dfinity and these other groups.  So, there's definitely been a lot of work on, first of all, standardising around the BLS12-381 curve, which is the one that everyone is kind of rallying around after some of the recent attacks; Zcash itself has upgraded to it.  And then also, creating these really ultra-optimised implementations, which there is now a lot of pressure to do, because the ETH 2.0 protocol ends up relying really heavily on the aggregate signatures and multisignatures.

So, things are improving, but I definitely agree that efficiency-wise there's going to be this fairly durable difference between how long it takes to do a pairing computation versus how long it takes to just multiply elliptic curve points by a scalar, which is what you have to do in the simpler protocols.

Patrick McCorry: Awesome.  I'm going to move topic quickly as well; I just want to finish off with this final topic.  I guess one big thing we've heard throughout this talk is that Bitcoin doesn't want to do anything on the blockchain; it wants to keep everything off chain as much as it can.  It even wants to reduce the number of signatures you have to verify from ten to one; that's the whole point of the MuSig stuff.  The whole point is just one signature on the blockchain; that one signature is going to do everything.  It's going to hide everything and everything's going to be inside that signature.  The idea there is that everything's off chain.

Now, there are two real big themes in the off-chain world.  One is the Lightning Network and one is these plasma and rollup systems.  I just want to give a quick summary so that everyone has a high-level idea of what these are about.  The Lightning Network's fairly straightforward.  You have two parties, Alice and Bob; they lock up coins in this big black box, so Alice puts in one coin and Bob puts in one coin.  When Bob sends a coin to Alice, he sends a message to Alice saying, "Alice, you're now the owner of two coins", and that is kept private between Alice and Bob.  When Alice wants to pay Bob, Alice will then send a new message to Bob saying, "Bob, you're now the owner of these two coins".  So, you basically have this black box where the coins are going back and forth rapidly between Alice and Bob; and all of that is kept off chain.  When they're done, they close the channel, they send a final balance to the blockchain and it's confirmed.

Now, the nice thing there is that Alice and Bob don't have to trust each other.  When they do a transfer it's redeemable, it's confirmed; they could instantly send that to the Bitcoin blockchain and get their money out fairly quickly.  With the Lightning Network you basically have multiple channels, and the idea is that if Alice has a channel to Bob, Bob has a channel to Caroline and Caroline has a channel to Dave, Alice can pay Dave through Bob and Caroline.  They can synchronise a single payment across that route; that is the whole point of the Lightning Network.

But there are loads of problems around that.  Everyone has to put collateral into these channels, so it is very collateral-heavy, and you have to find a route that connects Alice and Dave, and that route might not always exist.  And, it's only really good for pairwise payments; it is not good for multiparty applications, like having ten people join together; you'd need swaps, so it wouldn't really work on Lightning; it only works for a small set of parties at any given time.
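
[Transcriber's note: the back-and-forth balance updates Paddy describes can be pictured as a tiny state machine; this sketch is mine, and it stubs out the revocable, cosigned Bitcoin transactions that real Lightning channels use.]

```python
# A minimal model of a two-party payment channel: each payment is just a new
# cosigned balance state; only the final state needs to hit the chain.
from dataclasses import dataclass

@dataclass
class ChannelState:
    alice_sats: int
    bob_sats: int
    version: int    # parties keep the highest-version state they both signed

def open_channel(alice_deposit: int, bob_deposit: int) -> ChannelState:
    return ChannelState(alice_deposit, bob_deposit, version=0)

def pay(state: ChannelState, sender: str, amount: int) -> ChannelState:
    # Off-chain update: shift the balance and bump the version; no chain touch.
    a, b = state.alice_sats, state.bob_sats
    if sender == "alice":
        assert amount <= a
        a, b = a - amount, b + amount
    else:
        assert amount <= b
        a, b = a + amount, b - amount
    return ChannelState(a, b, state.version + 1)

state = open_channel(100_000_000, 100_000_000)   # one coin each, in sats
state = pay(state, "bob", 100_000_000)           # "Alice, you own two coins"
state = pay(state, "alice", 50_000_000)
print(state)   # closing the channel publishes only this final balance
```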

On the flip side, in the rollup world and the plasma world, the idea is that instead of trying to confirm these transactions quickly, why don't we just do a big batch?  So, you have an operator; the operator waits around and Alice comes along and sends a transaction.  The operator's job is to take all of these off-chain transactions, create an off-chain block, and periodically commit that to Ethereum or the base chain.  So, they're basically just a block producer in that case.  In a rollup, you post all of the data to the blockchain; in plasma, you keep the data off chain and you just post these little hashes, these checkpoints.

The trade-off is that you have to wait around, maybe ten minutes, one hour, until the operator closes those blocks.  So, your transactions aren't really redeemable or confirmed until they're confirmed on the blockchain.  Now, the point of using StarkWare-style proofs and these zk-Rollups is that you're saying, "I post a block; I can remove a lot of the data; I don't need most of the data now; I just need enough data so that the blockchain can keep a record of it".  And the SNARK basically proves that all the transactions in this block are valid.  So, this rollup block can have 100,000 transactions, it gets posted to the blockchain, but all you have to do is verify one tiny proof, which probably costs about the same as verifying a single transaction.  Then you can verify that this entire block of transactions is correct.

So, what you are really keeping off chain is the computation.  The data still hits the blockchain, but most of the computation's kept off chain.  So, it's a bit like the Bitcoin ideology; you should do as little computation on the blockchain as you can, you take all of that off chain, and the operator just batches it all together.  That's a very high-level overview of Lightning versus rollups, and if you didn't understand that, I'm sorry; I mean you guys do, but anyone listening, trying to describe that within two minutes is hard.
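
[Transcriber's note: the batching-plus-one-proof idea can be sketched in a few lines; the "SNARK" below is a stand-in stub of my own, since real systems use PLONK/STARK-style proof systems that don't fit in a transcript.]

```python
# Toy rollup flow: the operator executes a batch off chain, and the chain
# checks one small proof instead of re-running every transaction.
import hashlib

def state_root(balances: dict) -> str:
    return hashlib.sha256(repr(sorted(balances.items())).encode()).hexdigest()

def operator_make_batch(balances: dict, txs: list):
    for sender, receiver, amount in txs:             # executed off chain
        assert balances.get(sender, 0) >= amount
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount
    root = state_root(balances)
    proof = "proof-of-valid-transition-to-" + root   # stand-in for a real SNARK
    return root, proof

def onchain_verify(new_root: str, proof: str) -> bool:
    # Constant-size check, whether the batch held 3 or 100,000 transactions.
    return proof == "proof-of-valid-transition-to-" + new_root

balances = {"alice": 10, "bob": 5}
new_root, proof = operator_make_batch(balances, [("alice", "bob", 3)] * 3)
print(onchain_verify(new_root, proof))   # True
```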

One thing I wanted to bring up, because I guess, Vitalik, you've been talking about rollups a lot: what do you think the hold-up has been so far?  I mean, the gas prices are a bit crazy at the moment; they're 400 to 500 gwei, $6 to do a single transfer.  What do you think have been the bottlenecks for rollups?

Vitalik Buterin: So, there are two families of rollups, right; there are the optimistic rollups and the zk-Rollups, the main difference being that optimistic rollups use fraud proofs.  The operator publishes a block and they just publish what they claim is the result.  Then if they're wrong, someone else can publish a fraud proof, and in that case the entire computation of that particular block actually gets run on chain.  So, it's this kind of interactive game where you re-run the computation; if someone thinks that someone else is wrong, whoever's wrong ends up losing a bunch of money from their deposit.  Or you have zk-Rollups, where instead you use these zero-knowledge proofs to just directly prove validity, without needing interactive games for it.
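
[Transcriber's note: a toy version of that fraud game; the names and the flat dictionary state are illustrative only.]

```python
# Optimistic-rollup dispute in miniature: the operator only asserts a result;
# the chain re-executes a batch when challenged and slashes whoever was wrong.
def apply_batch(state: dict, batch: list) -> dict:
    state = dict(state)
    for sender, receiver, amount in batch:
        assert state.get(sender, 0) >= amount
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def resolve_challenge(prior_state, batch, claimed_state) -> str:
    actual = apply_batch(prior_state, batch)   # run on chain only on dispute
    return "slash operator" if actual != claimed_state else "slash challenger"

prior = {"alice": 10, "bob": 0}
batch = [("alice", "bob", 4)]
print(resolve_challenge(prior, batch, apply_batch(prior, batch)))  # slash challenger
print(resolve_challenge(prior, batch, {"alice": 10, "bob": 4}))    # slash operator
```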

Surprisingly enough, it's the zk-Rollups that came first.  You might think that optimistic rollups are simpler because they don't rely on the fancy stuff, but I think the reason why the zk-Rollups came first is that the optimistic rollups are trying to do more, and they're trying to do more because, for now, they can do more.  Basically, the challenge is that zero-knowledge proofs are really good at structured computations.  If you have a computation where you can put everything in a big table and you verify the exact same equations millions of times, it's very efficient to prove it in that format, which is good for, say, hash verification and signature verification.  But for general-purpose computation, efficiently making a zero-knowledge proof is much harder, right; it's considered to be one of the holy grails of SNARKing to be able to SNARK general-purpose virtual machine execution.

There has been progress, so for example the Aztec Team, which did PLONK, they released this thing called PLOOKUP which allows efficient proving of functions with look-up tables, which is actually a significant boost to virtual machines; and, there was Cairo from StarkWare and I think they use some similar technology in some areas but not in other areas.  So, it's been improving, but it's still quite far from being able to practically verify VM execution.

Whereas an optimistic rollup is like, "Hey, you've got the VM and you can just go and write whatever verifier you want, and it's not particularly hard", right?  So optimistic rollups are trying to do more.  What I mean is, they're trying to support basically an EVM-equivalent environment on top of Layer 2.  That's something that developers love because, for a developer, what it means is that they can just take their existing application, hit compile and hit deploy again, just using slightly different software, and it works roughly the same way that it worked before, except the fees would be 100 times lower; whereas with the zk-Rollups, it's just payments or DEXs or a couple of other use cases for now.  On the other hand, with zk-Rollups we already have Loopring on chain, we already have zkSync on chain, and we already have StarkWare's DeversiFi on chain, all running as rollups.

I'll briefly mention the third category, plasma, which keeps more data off chain; there's the OMG Network, which is once again payment-specific.  The more general-purpose application things don't yet exist, and it looks like the technology for those will be ready in a few months; but that is the thing that I think most people are waiting for because ultimately, Ethereum is all about doing things that are more than just moving coins around; that's what people are expecting.

Patrick McCorry: I have one question on this and then I'll move onto the Bitcoin stuff, because this is obviously a Bitcoin podcast.  My impression of the roadmap is, you have ETH 2.0, and ETH 2.0 is a sharded solution and it's being built; but if you look at rollups, rollups are like shards, and the idea of a shard is that only the people who are in that shard, in that blockchain, care about validating it.  If there is a shard for hotels and a shard for train bookings, and I'm not booking a train, I don't care about the train bookings; I just care about the hotels because I'm booking a hotel.  So, do you not think rollups are a bit like getting a sharded solution into Ethereum through the backdoor?  It might even look a bit hacky, but all these systems are hacky, and maybe that will take over too.

Vitalik Buterin: So, rollups are powerful, but they do have the one limitation, which is that you need some amount of data on chain for every transaction.  The reason why you need data on chain is basically because, in order to guarantee that people can withdraw, you need to have enough data on chain that anyone can reconstruct the rollup's internal state, and the rollup's internal Merkle tree that contains the balances and so forth.  So, it's about 16 bytes on chain for most rollups that are designed well; that's six times less than regular transactions, but then the scaling factor goes up to about 100, because you also get rid of all the computation.

But there is this amount of data, and so ultimately the capacity of the system is bounded by something like 3,000 TPS right now.  And in the long term, if we want to get more users and cheaper non-financial applications, we're going to have to go even higher than 3,000 TPS.  So, that basically means scaling the data layer, and designing the data layer in such a way that you can verify its integrity without every single participant having to personally download and check the availability of all of the data.  So, that's kind of the core technical reason why I think in the long term, if we want to achieve the system's full potential, you basically need to have rollups and sharding, and stack the two on top of each other.  But if you accept different trade-offs, you can definitely get to a medium level of scalability with just rollups.
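
[Transcriber's note: the ~3,000 TPS figure can be sanity-checked with back-of-the-envelope arithmetic; the gas numbers below are my assumptions about Ethereum at the time of recording, not figures Vitalik gives in the audio.]

```python
# Rough capacity check: calldata-limited rollup throughput.
gas_per_block         = 12_500_000   # assumed block gas limit
gas_per_calldata_byte = 16           # assumed calldata cost per byte
block_time_seconds    = 13           # assumed average block time
bytes_per_rollup_tx   = 16           # the per-transaction figure quoted above

bytes_per_block = gas_per_block / gas_per_calldata_byte        # ~780k bytes
tps = bytes_per_block / bytes_per_rollup_tx / block_time_seconds
print(round(tps))   # ~3,800 -- the same order of magnitude as "3,000 TPS"
```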

Patrick McCorry: Yeah, so I'm just going to summarise that.  For rollups, because most of the computation no longer happens on the blockchain, the main bottleneck for scalability isn't computation anymore; it's really just data availability, sending data across the network between all the miners and the users, and that's why you can increase the throughput.  Now, obviously these techniques don't work in the Bitcoin world today and there's a very good chance they're not going to work in the Bitcoin world in the foreseeable future.  But Bitcoin does have the Lightning Network, which is another type of scalability.

So, I do want to ask Tadge; obviously, Tadge was one of the inventors of the Lightning Network, and you were thinking about a lot of these protocols back in 2015.  You've seen the evolution over the past five years; I mean, it's alive now.  The last I checked there were 30,000 Lightning nodes, but that's actually not a good stat because some of them are hidden.  How have you found the evolution over the past five years, seeing your little baby be written up and designed and now deployed and implemented by a crazy Bitcoin army of developers?

Tadge Dryja: Yeah, so it's kind of interesting.  I'm really not as involved; I have been working on Utreexo and other systems in Bitcoin.  I guess it's because there's so much development on it already, and it's like, "Well, everyone else is working on it.  If all these other people are working on it, maybe it doesn't make sense for me to work on it as much".  And also, because I sort of co-authored it with Joseph, it's sort of called "mine", but that's not right; it's the Lightning Network, anyone can use it.  So, it's hard to work on it because it's like, "Well, this was my thing; but no, everyone else gets to use it".

I don't know; I guess, Vitalik, if all these other people started working on Ethereum and you disagreed with what they were doing, it might be kind of annoying, because it's not your network, but at the same time it's like, "Well, I kind of came up with some of these things".  So, I think it's awesome, but there are a lot of things that I would change, and so that bugs me.  Other people are working on it; I'm going to focus on this Utreexo and accumulator stuff and, who knows, maybe a year or two from now, once that's taken off and people are using it, work on some other thing.

And, I think it's sort of a good way to work on it, because you don't want to have people in charge, I guess; at least I don't want to be in charge of things like that.  Overall, it's really cool to see and it's really cool that people are using it.  It did sort of weird me out in the beginning, when people had these little Lightning emojis on their Twitter, especially with the whole Bitcoin Cash and block size fight; it was getting to be this thing and I was like, "Whoa, I don't want to get involved in that".  But technically, it's really cool, and a lot of the stuff that Rusty and Lightning Labs and people are doing is really cool.

Patrick McCorry: Yeah, so one thing for Lightning I want to bring up as well; so Lightning, in my opinion, and we can all challenge this, I don't think Lightning's very good for payments.  What I think Lightning's really good for is synchronising two off-chain ledgers.  So, a really good example of this: there's a start-up called ZBD doing gaming on Lightning.  What is really cool about it is that I'm playing Street Fighter with someone else and we're fighting; someone watching the game can scan a QR code, send sats to the game and buy me a power-up.

But actually, that's not really a payment; what they're really doing is they have a ledger on BlueWallet, which is in this case fully custodial, and they have another custodial wallet that is the game, and they're using Lightning to synchronise the payment across the two different ledgers.  I think use cases like that are mostly unappreciated at the moment, but that synchronisation between two ledgers, for me, is going to be the killer thing for Lightning, and not necessarily people buying stuff in a candy shop.  I don't know, I just wanted to see what your thoughts might be on that.

Tadge Dryja: Yeah, I am sort of surprised that exchanges aren't supporting it, because initially, when working on it, I was like, "We should talk to exchanges; this is the premier use case for this".  Instead of having a deposit and withdrawal button, you now have a fund-channel and close-channel button, and you're not custodial, right?  Because that's what a lot of usage is, trading and exchanges, and it seems great.  And, I did sort of think that that was how the network would build up, that Coinbase would have a big node and Kraken would have a big node, and then the users would have channels to these exchanges.  That hasn't really happened.  It is starting to, I think, and that would be really cool, because that to me seemed like a great use for it; you can get rid of the custodial problem with all the exchanges.  So, hopefully that does go forward too.

Patrick McCorry: Yeah, so one thing I want to add there is, the idea is that I'm an operator like Coinbase; there are these massive whales; they have coins in my service, but they don't always want to keep their coins in my service.  So, they set up a Lightning channel with me; they can quickly deposit, do a trade, come back, quickly withdraw, and they can minimise their trust in me.  That's, I guess, what you were alluding to for that use case.

So, I think there are two parts there.  One of the use cases I see exchanges could adopt is, you have Bitstamp and maybe Bitfinex; they have a Lightning channel in the middle; and they just support transferring coins back and forth between the exchanges, because that's what most people use these cryptocurrencies for anyway; they just transfer Tether from exchange to exchange to capture some arbitrage.

Peter McCormack: Sorry, just to jump in there.  Isn't that also what Liquid is pitched at doing as well?

Patrick McCorry: Exactly, yeah.  So, Liquid is basically a federation.  Actually, Andrew, do you want to describe that, since you are working on that project; maybe compare it to Lightning, the differences?

Andrew Poelstra: Yeah, I get asked this a lot, so I guess I should come up with a simple, punchy one-liner.  Basically, the way that Liquid works is with a separate blockchain.  Everybody who participates in Liquid, and there is an unlimited number of people who can participate, takes their coins and pegs them into the Liquid chain, by which I mean they send the coins into the custody of a federation, a quorum of 15 federation members.  Then while coins are in the Liquid network, they just move around on the Liquid blockchain and every block is signed by a quorum of this federation, by I think 11 of these 15 participants; sort of sign, sign, sign.  Then later, when people want to move their coins back off the Liquid blockchain, they basically raise a flag; they have a special kind of transaction that says, "Please give me my coins back", and then the federation, who has physical custody of the coins on the Bitcoin side, will send the coins to the right person.

So, if you were using Liquid such that the federation consisted of two people, and those two people were the only users of the system, then it would be similar to a Lightning payment channel; it would be the same model.  But morally, the difference between Liquid and Lightning is that in Lightning, the people who have custody of the coins are the actual counterparties; and in Liquid, the people who have custody of the coins are this extra object, this federation.

The benefit of that is that you can have an arbitrary number of participants who are all acting in sync, and you don't need to have payment channels that are chained off of each other and worry about scalability issues related to that.  And another benefit is that by having a separate chain of blocks, we can have confidential transactions and all of this cool, whatever crypto shit or experiments we want to do, we can do that on Liquid.

On Lightning, you have these individual payment channels, which are two people who are basically maintaining a state that consists of a valid Bitcoin transaction, and anyone can close the channel by publishing the transaction to the chain.  So, you're largely, but not entirely, limited in Lightning to doing things that you can do on Bitcoin; Bitcoin's technical capabilities are the same as Lightning's technical capabilities.  Then also, when you're connecting more than two people, there's a bit of a technical thing where you create these HTLCs that are all linked to each other.  But, I would say the biggest difference is the custody model, basically.  In Lightning your coins never leave your custody; in Liquid there is this federation trust requirement.
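
[Transcriber's note: the 11-of-15 rule Andrew describes is just a threshold check; this sketch is mine and stubs out the actual block and peg-out signatures.]

```python
# Minimal model of a federated peg: a block or peg-out counts as valid only if
# a quorum (11 of 15) of known functionaries has signed it.
FEDERATION = {f"functionary-{i}" for i in range(15)}
THRESHOLD = 11

def quorum_signed(signers: set) -> bool:
    return len(signers & FEDERATION) >= THRESHOLD

eleven = {f"functionary-{i}" for i in range(11)}
ten = {f"functionary-{i}" for i in range(10)}
print(quorum_signed(eleven))   # True: 11-of-15 can sign a block or peg-out
print(quorum_signed(ten))      # False: below the quorum, nothing moves
```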

Patrick McCorry: I think there is a collateral difference as well.  In Lightning, if I am Coinbase and I have 1,000 channels, the collateral lockup and management must be a headache; whereas if you use a sidechain like Liquid, the operator doesn't need to put up any coins of their own for the system to run; coins come in as you go.

Andrew Poelstra: That's it exactly.  When you move coins into the system, you are putting up those coins and then they're in control of the federation.  And when you take them out, they come out of the federation.  There's never any collateral beyond what's actually in the system.  That's all, yeah.  It's always one-to-one, no prefunding.

Patrick McCorry: Actually, you have just made me think of this, so maybe you can help categorise it.  You have a sidechain like Liquid, where you're trusting the federation, a quorum of those federation members.  A rollup is like going a bit further, where you trust the base chain for security as opposed to a federation.  You lock your coins into a smart contract, they get unlocked in the rollup, but you still have this block producer, who can really only censor your transactions; so if they censor your transaction out of a block, you just withdraw via the base chain and you get out.

And then Lightning's on a different angle.  No one can really see my picture, I guess, so I'm using my hands to try to describe this!  Lightning, to me, is a nice way to synchronise these different ledgers, between sidechains and rollups.  So if you're Coinbase, for example, instead of having a mass of huge channels to all your customers, you just run a sidechain or a rollup, and then use Lightning to jump in and out really quickly.  To me, that seems like the ideal setup.  I don't know if you guys have thought about that before?

Andrew Poelstra: It's a cool thought.  In the original sidechains whitepaper, which was sort of the Blockstream announcement paper, we speculated a little bit; I haven't looked at that paper since I wrote it in 2014.  But near the end, we sort of speculate on future directions, like, "Oh, if we had fully general zero-knowledge proofs, and if we had support on the Bitcoin blockchain, then we could do sidechains in a much more efficient, much better way, where you have Bitcoin, or whatever the main chain is, ultimately enforcing all the transfers".  Then you have these magic, at the time unimaginable, zero-knowledge proofs making that tractable.  That is very similar, I guess on a high level, to the rollups they're doing; so neat, I hadn't considered that connection.

Patrick McCorry: Vitalik, do you have any comments; you're quite quiet now?

Vitalik Buterin: You haven't directed any questions at me yet, so I'm just happy to listen.  No, I think with channels and the plasma/rollup family, I definitely like thinking of plasma and rollups as being in the same family; they're different, but they're definitely more similar to each other than they are to channels.  They have different benefits and they have different properties.  So channels, for example, are vastly superior when you have repeated transactions, so any kind of payment for a subscription makes a lot more sense there.  Definitely, settlements between institutional actors that each represent a lot of people; that is another one of those cases where you have a lot of repeated transactions and so on; anything channel-based ends up making a huge amount of sense.

Another important benefit of channel-based solutions is that you get instant transaction confirmation, as opposed to having to wait for a transaction to get included in a block to get some degree of security.  But on the other hand, the weaknesses are that they're conceptually more complex; you have these routing issues; you have the capital lock-up issues; you have potentially even a trade-off between the system becoming centralised versus the system having very high capital lock-up.  All of these things sit against, on the other side, something like a rollup, or even a plasma.

The ideal is a zk-Rollup, because you can get all of the properties with basically no capital on top of what's stuck inside of the system, right?  If there is $100 inside the thing, you need $100 to secure it.  With optimistic rollups and plasma, you can technically get away with the same thing, but it's better if you have more collateral, because that way people's in-progress withdrawal slots can be bought up.  When you start a withdrawal, normally the withdrawal will take say 7 days or 14 days or whatever to process, but someone else can just buy up your withdrawal slot, you know, give you 99.99% of the amount, and then they put up the capital for the two weeks themselves.  And so, you get the instant withdrawal experience, and they get some interest for putting up the capital for that period of time.  So, it depends on the situation, it depends on the use case.  In the long term, I'm expecting all three of the design patterns to be popular in some form or another.

Patrick McCorry: If you guys want actually, what I can do now is I can summarise the discussion we've had in the podcast, then everyone can have a final word if you want, and then maybe Peter can wrap everything up; maybe that's worth doing now, because I know it's a bit of a long episode?!

Peter McCormack: No, it's fine.  I mean, it's 12.50 am in the morning but it's fine.  It's late for you and I, Paddy!

Patrick McCorry: I had a nap at 5.00 pm for this because I knew it was going to be late.  Okay, I'm going to summarise basically what we've spoken about for the past two hours.  We started with the origin story and the narrative.  Bitcoin was introduced to the world as a system to protect people from fractional-reserve banking and unlimited inflation; Satoshi was never shy about that.

One thing we didn't talk about was that Satoshi also implemented a casino in the original client.  I don't really understand why he did that, but he did; it's been removed, I guess, since.  Later on, the story started to emerge of a peer-to-peer network that was a decentralised, anonymous, scalable network.  That was when I got in, in 2012/2013, and obviously that wasn't true at all; it's probably the most traceable currency in the world.  But then this third application started evolving on Bitcoin; that was building applications on top.  This is, for example, Satoshi Dice, Mastercoin, coloured coins, the Omni Layer that basically powered Tether for a lot of years.  And I guess that's what Vitalik and Ethereum picked up on; when they started doing Ethereum, they were going to build on Bitcoin, but they built enough critical mass and they just went and deployed Ethereum.

Then we sort of spoke about the Frankenstein systems.  Both of them were grassroots efforts and they both have these very quirky bugs that make them really awkward and difficult to deal with sometimes.  Bitcoin had lots of early bugs; it still has some bugs, because you can't really get rid of them, like the signature bug for example.  Ethereum has lots of different attacks that we didn't even really explore, to be honest; there are lots of problems in Solidity we could have explored.  The biggest issue is basically security audits, because people deploy these smart contracts in production a lot of the time and then get the security audit afterwards, or else it's too expensive and it doesn't get done.  But obviously, there's a lot more work toward formally verifying these contracts and building better tooling to get rid of most of the common bugs, like the Vyper language that Vitalik was working on.

Then going down through that, we spoke about scalability of the network.  Bitcoiners are like assembly programmers; they know every detail, they don't want any computation on Bitcoin, they just want Bitcoin to do signatures and small conditions and that's it.  And even with signatures, they don't want to verify ten signatures; they want to compress them into one signature, so all you have is a signature and nothing else.

But there's been a lot of work on Bitcoin around scalability.  They didn't have to change the consensus rules, didn't have to change the protocol; they just made the software way more efficient.  I can still verify the blocks in a day, and that's a good effort on their part.

In the Ethereum world, it sounds like there has been a lot of optimisation as well, and the metric there was the uncle rate.  I won't explain what the uncle rate is, but it went up for a bit and then back down to 7%.  The software got more efficient, so there were fewer forks on the network.  But a lot of the changes are focused on the protocol; you know, ETH 2.0 has taken a lot longer than everyone expected.

Then we went on to off-chain scalability: rollups versus Lightning.  I think that's basically the summary of the entire podcast in a nutshell.  Oh, and one last thing: the whole point of this discussion was really to highlight the differences and the benefits of both networks.  Hopefully you will all see that they are complementary and have different goals; I don't really need to illustrate those goals now, but they both have different goals.  Vitalik, do you want to go first with some final words?

Vitalik Buterin: Oh, I see, I guess it's reverse alphabetical order this time around!  Again, I definitely agree with all of that.  I think there are definitely some different goals going into the two systems, though I think there are also a lot of shared values at the core.  We are both trying to build maximally trustless and secure systems, and help people do things without having to give all of their assets to a centralised intermediary for the duration, as much as possible.  There are definitely different applications being emphasised and different security trade-offs around the edges.  But, whether it's ideologically or even technologically, as we've seen with how both networks use the secp library for example, there's definitely a fairly big shared core, and that's something that is important not to forget.

So, I'm just looking forward to the next ten years, seeing how the rest of the technology ends up getting rolled out and how the systems come to fruition.

Patrick McCorry: Tadge, do you want to go next?  I'll just go in reverse order now.

Tadge Dryja: Sure, yeah.  I actually worked a little bit on Ethereum before the whole crowdsale and stuff, like on some of the hash functions they looked at for their proof of work, and I talked to people back then, but I've ended up working mostly on Bitcoin.  It is interesting though; I definitely do read ethresear.ch or whatever and try to keep up, and to some extent there's a bit of a loss because of the different philosophies and different things there, because there's not much communication between the two groups.  It makes sense to some extent, but at a conference in February, before the whole COVID thing, I remember talking to some people who were working on stateless clients in Ethereum and it was like, "Oh, we should talk about this more", and then we left and we're sort of in different worlds.

So, I think it would be cool if we got to learn about how Ethereum solves similar problems to the ones Bitcoin has, because at the core they are dealing with very similar issues, right?  We both have these big LevelDB things with millions and millions of entries and it's like, "Okay, how do you scale these things?"  They're taking very different approaches, but it makes sense for people who are interested in Bitcoin to watch.  This podcast is a good example: yeah, I am mostly interested in Bitcoin, but I definitely keep an eye on Ethereum and try to read what they're working on and how they're doing it; and hopefully vice versa, people in Ethereum can look at what Bitcoin's doing, and we can take libraries from each other, both ways.  So, hopefully it can be a mutually beneficial thing.

I think right now, it is somewhat stand-offish, like there's a bit of animosity there.  I don't know, maybe that'll change in the future.

Patrick McCorry: Andrew do you want to go next?

Andrew Poelstra: Yeah, I'd echo the comment about there being a lack of communication and that being an unfortunate thing.  I also speak to a fair number of people in the Ethereum world and generally in the altcoin development world.  Usually, I'll just encounter them when I'm visiting friends in Boston and the MIT area; I'll sort of wind up in a room full of people whom I don't otherwise tend to talk to online or at conferences or whatever.  Part of the reason, or certainly a large part of the reason, that I don't tend to reach across the aisle so much is that, especially in 2017, there was a lot of irrational exuberance, a lot of scam coins and things like that, that for a long time were using Ethereum as their base platform.

As Vitalik has talked about, as things get more expensive, maybe bad ideas get priced out, and unfortunately maybe some good ideas too; things certainly have calmed down and the Ethereum world is much less irritating than it has been in the past.  There's a lot of cool, genuine research, versus people who have whitepapers full of fluff, where they've claimed to have solved problems that are impossible to solve and so forth, and who were raising billions of dollars in exchange for just this kind of fluff.  We have seen that go away from the wider cryptocurrency space since 2017 and in particular, I think there's a lot less of it in the Ethereum world.  In its place, I'm really excited by what I see.  I'm really excited by the research into different plasmas; I'm excited by the research into off-chain stuff; I'm excited by the zk-Rollup stuff; I'm excited by zero-knowledge proof development in general.

We didn't go too much into these kind of philosophical disagreements in this discussion, for better or worse, but certainly I would say, Vitalik, almost everything that you guys propose for ETH 2.0 I think is just like you are biting off more than you can chew and it's never going to happen.  And, I'm really glad that you all disagree with that assessment and that you're driving forward the kind of basic research that is needed to create these things, because if we have efficient zero-knowledge proofs, if we can achieve that holy grail, then that would be incredible for everybody, including Bitcoin and including a lot of the research that I do work on day-to-day.

Peter McCormack: All right, well listen, I'll close out by saying I pretty much didn't understand 90% of what was discussed this evening, but I'm glad everyone could come together and have a chat and it was useful for you.  It kind of proves something for me that actually this stuff, some of it is way too technical for certain people who might be investing and actually, I don't think people do always need to hear this.  I think sometimes, actually perhaps a show sometimes that I do with you, Tadge, or you, Andrew, where I get you to explain the fundamental, the basics, the simple bits I need to know, helps people to have that limited, small amount of knowledge.  

I'd be interested to see the feedback on this.  I think there'll be some people who'll be, "Great, really glad to hear it; the fact that they've got Tadge and Poelstra and Vitalik all together", I think it's going to be mind-blowing for some people.  I think some other people may switch off and go, "That was too much for me".  So, it'll be really interesting to hear the feedback; but if people like yourself, Tadge, and Andrew can benefit from some of the research and perhaps build some relationships out of it, then I think that's a good thing, but yeah, let's see what the feedback is.

But, thank you all, it's 1.00 am in the morning, I'm shot, I know Paddy's going to be a little bit as well, but yeah, I appreciate you all coming on and I wish you the best and let's see what people think of this.