Back to basics: how Satoshi designed Bitcoin for censor resistance
TL;DR: Crypto networks require a speed limit to ensure censor resistance, the property of the Bitcoin network that sets it apart from all other computer networks. Satoshi’s network design choices were suitable for achieving this property, and no fundamental scaling improvement is possible on a base layer. Bitcoin is scaling through an economically optimized scaling solution called the Lightning Network.
Proof of Work supports censor resistance by allowing anyone with computing power to serve as a ledger writer. Proof of Stake is untenable for creating a censor resistant network with a hard cap token supply because only those who own the underlying token can write to the ledger, making the problem of centralization unavoidable.
Introduction
Many new entrants to the crypto space find themselves drawn to various projects, each of which makes promises about being superior to Bitcoin in some fundamental way. Of course, such claims are necessary to the life of any such project, or their raison d’être would be null from the get-go. This article explains precisely why it is not possible to make a meaningful improvement to Bitcoin’s fundamental network design features.
Bitcoin is often criticized or questioned regarding its speed. “Only seven transactions per second,” they say. Most alt projects’ claims to “fundamental improvement” find their basis in a claim to “scalability,” or the ability to allow for a much greater number of transactions per second (TPS) to take place on their networks. So let’s start to dig in here, and illuminate the truth of why Bitcoin is and will remain the undisputed champion of crypto networks.
In order to understand Satoshi’s design choices, we must ask ourselves what the point of all this really is. What was Satoshi’s purpose in creating Bitcoin? What makes Bitcoin Bitcoin, or, what makes it special, or different from all other computer networks? The simple answer is that Bitcoin is censor resistant. This means that it is extremely difficult (this author would argue impossible at this stage) for any actor, including state actors, to prevent access to the network or to stop the network from functioning altogether.
The reason why Satoshi wanted to develop such a network, and why many people value the property of censor resistance, is beyond the scope of this article. But suffice it to say that censor resistance is the only reason Bitcoin is really special in any way. Take away that one property, and the rest becomes a moot point. We can say that censor resistance is really the only interesting feature of Bitcoin, the only thing that makes us care at all. Let’s now take a look at how Satoshi cracked the censor resistance problem.
Achieving Censor Resistance
Satoshi realized that the key to censor resistance lay in enabling anyone who wanted to access the network directly to do so — to verify for themselves the information being transmitted across the network. This includes, importantly, verification of the total supply of coins, and that no double-spends are taking place. Only by creating a system that enables anyone who wishes to verify this information for themselves would it be possible to ensure that the monetary and spending policies stay intact.
So the problem really became a question of how to ensure that anyone who wished would be able to access such information, a prerequisite for being able to verify its accuracy. From here, we can start to see the importance of the “speed limit” Satoshi introduced to Bitcoin. By limiting the amount of information-to-be-verified that can be transmitted by the nodes operating on the network, each actor can be sure that they have a full and accurate view of the network state.
So this is really our first key point to understand: any Distributed Ledger Technology, or crypto network, that aims to achieve the type of censor resistance envisioned by Satoshi must have a speed limit. The reason is quite clear: if there is too much information-to-be-verified being sent around the network, then individual nodes bear a greater burden in receiving and processing that information. They will need a faster network connection, more processing power, more storage — or a combination of all three. So by limiting the networking (and consequently the processing and storage) requirements, Satoshi sought to maximize the number of people who could directly participate in the network — thus bolstering censor resistance.
Let’s look just a bit closer at what happens if we decide to “scale” layer 1 of any crypto network. Indeed, the first step in designing any DLT is to choose a bandwidth limit. In Bitcoin, this limit is approximately 2.5 megabytes every 10 minutes. If one wanted to increase this speed limit, one could increase the size of the block, the block frequency, or both. So you could, for example, choose 25 megabyte blocks every 10 minutes, or 2.5 megabyte blocks every minute. These are, for our purposes, equivalent speed limits. Or you could choose 1 gigabyte blocks every second. Sounds great, right? Think of all the information-to-be-verified that can fit in such large and frequent blocks! Such a network would have a maximum throughput very high compared to Bitcoin’s. But would such a network be censor resistant? The answer is that as you scale the networking requirements, you necessarily raise the requirements (costs) of participation. If verifying network activity requires a multi-gigabit internet connection, it is clear that most people will not be able to participate directly.
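To make the resource arithmetic concrete, here is a back-of-the-envelope sketch in Python. The alternative speed limits are the hypothetical ones from the paragraph above, and the figures it prints are implied minimums for simply receiving and storing blocks, before any validation cost:

```python
# Back-of-the-envelope node requirements implied by a "speed limit".
# The non-Bitcoin entries are hypothetical networks from the text above,
# not measurements of any real chain.

SECONDS_PER_YEAR = 365 * 24 * 3600

def node_requirements(block_bytes: float, block_interval_s: float) -> dict:
    """Sustained bandwidth and yearly chain growth implied by a speed limit."""
    bytes_per_second = block_bytes / block_interval_s
    return {
        "bandwidth_mbit_s": bytes_per_second * 8 / 1e6,  # megabits per second
        "storage_gb_year": bytes_per_second * SECONDS_PER_YEAR / 1e9,
    }

limits = {
    "Bitcoin (~2.5 MB / 10 min)": (2.5e6, 600),
    "25 MB / 10 min": (25e6, 600),
    "1 GB / 1 s": (1e9, 1),
}

for name, (size, interval) in limits.items():
    req = node_requirements(size, interval)
    print(f"{name}: {req['bandwidth_mbit_s']:,.2f} Mbit/s sustained, "
          f"{req['storage_gb_year']:,.0f} GB/year of chain growth")
```

Bitcoin’s limit works out to a trickle (well under 1 Mbit/s, and roughly 130 GB of chain growth per year), while the 1-gigabyte-per-second network demands a sustained 8,000 Mbit/s connection and roughly 30 petabytes of new chain data per year: hopeless for a home node.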
As you increase the cost of running a node, you effectively begin to compromise the censor resistance property of the network. You are in effect decreasing decentralization. Another way to think about this is to think of Google’s network model. Google is able to process an enormous amount of information from around the world, but it is extremely centralized. No individuals are able to run a “Google node” at their home in order to verify the work being done on Google’s network due to the extreme cost of running such a node.
The Economics of Possible Alternative Speed Limits
Now that we understand the role of speed limits in achieving censor resistance, let’s take a closer look at Satoshi’s specific design choice, and ask ourselves whether or not Satoshi chose an acceptable speed limit. Would it be possible to choose a different speed limit, one that both scales a layer 1 crypto network in a meaningful way, and does so without compromising censor resistance? To answer this question, we need to explore the economics of the block space market, or the market for scarce block space that develops when crypto networks become congested.
Let’s begin by asking the question, “What kind of speed limit might a designer choose that would provide a meaningful throughput benefit over Bitcoin’s current offering?” This author argues that there is no speed limit that would provide a meaningful benefit. The reason is this: Bitcoin aims to be a settlement layer for the global economy. With eight billion people, plus computers and every other possible machine economic actor out there, there is simply no speed limit that would allow a significant proportion of the world’s economic transactions to take place on a layer 1 crypto network while maintaining censor resistance.
In other words: There is no way that you, at your own home, will be able to process all the world’s economic exchanges. I hope this point is clear. Yes, it may be possible to provide some marginal benefit by increasing the TPS to 100, 1,000, or even 10,000 TPS on the base layer of a global crypto network. But such speeds would not be meaningful in the sense that they would still be unable to provide all the world’s economic actors simultaneous access to that base layer. Ten thousand TPS, compared to the total number of transactions taking place around the world at any given moment, is essentially nothing. From the perspective of a user of such a network operating at global scale, it is the functional equivalent of Bitcoin’s infamous “seven TPS.”
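A rough calculation shows why. The two-transactions-per-person-per-day figure below is an assumption chosen to be conservative, not a measured statistic:

```python
# Rough illustration with assumed figures, not measured data.
world_population = 8e9
txs_per_person_per_day = 2  # deliberately conservative assumption
seconds_per_day = 86_400

required_tps = world_population * txs_per_person_per_day / seconds_per_day
print(f"TPS needed for everyone on-chain: {required_tps:,.0f}")  # ~185,000

for layer1_tps in (7, 100, 1_000, 10_000):
    print(f"{layer1_tps:>6} TPS serves {layer1_tps / required_tps:.3%} of that demand")
```

Even the 10,000 TPS network covers only about 5% of this deliberately lowballed demand estimate, and that is before counting any machine-to-machine activity.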
We can extend this analysis by looking at the total fees paid to access Bitcoin versus our hypothetical super-fast network. To think about this issue, we must first realize that both networks offer only extremely scarce block space relative to the total demand for block space in a global adoption scenario. Economically speaking, the only difference between the two cases is that per-transaction fees would be higher in Bitcoin, with its roughly 2.5 megabytes every 10 minutes speed limit, than in our hypothetical ten thousand TPS network — but the block space market would exist in either case. That is to say, when operating at global scale, both networks would find themselves congested.
My personal speculation is that the two markets would be of relatively similar size, since total fees are a function of the scarcity of block space relative to the total demand for it. We can approach this question from another perspective: in theory, if you could fit the entire sum total of humanity’s economic interactions into layer 1, then yes, such a network would be feeless. But as we have seen, such a network would not be censor resistant.
The argument I am putting forth here is that any potential throughput goal you might choose while still having a reasonable shot at maintaining censor resistance would necessarily generate a block space market of similar size to the one Bitcoin has in its current form. So it’s a wash! This is why Satoshi’s early decisions are so sticky. Not only do they work, they are “good enough” relative to any other network design choices that he could have made then or that anyone could make today. The fact that no one has come along with a Bitcoin fork that eliminated the block space market while protecting censor resistance serves as empirical evidence of this theoretical conjecture. Indeed, the fork war of 2017 demonstrated that Bitcoin users value censor resistance above all else, with the understanding that fees are a necessary (and desirable!) aspect of Bitcoin’s security.
I now turn to a discussion of sharding, to further illuminate the dynamics of Bitcoin’s block space market, and why “seven TPS” really is enough.
The Economic Limits of Sharding
Most commonly, people talk about sharding in the context of scaling layer 1. In this context, sharding is the idea that you “split” a crypto network so that any given node processes only a certain subset of the total information passed around the network. The key idea is that there is always a bit of overlap between nodes in the specific sets of information they are required to process, ensuring that no one is able to cheat — because if someone cheats, their “neighbors” will see it and reject the invalid transactions.
This sounds nice, but if you look closely at this vision of sharding, you will notice that it does not offer truly “unlimited” scaling. The reason is economic in nature. As you want to travel further across the shard space, you will be required to pay “cross-shard routing fees.” Since the intermediary node operators have their own speed limit requirements to think about, getting priority to send your transaction across a shard space in a congested network will have an associated cost. The size of the shard space accessible to any single actor is thus limited by the cost of this movement. The further you wish to travel, the greater the proportion of your transaction that is eaten up by such travel fees. In theory, one could create a “boundless” shard space, but one must still contend with the fact that the entire space would not be reachable by all users, due to the economic limitation I have outlined here. So in the end, a layer 1 sharded network fragments into separate economic localities. All semblance of global consensus is lost.
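To see how travel fees bound economic reach, consider a toy model in which every shard hop charges a proportional routing fee. The 0.5% per-hop rate is an invented assumption, chosen only to show the compounding effect:

```python
# Toy model of cross-shard travel fees. The 0.5%-per-hop rate is an
# invented assumption that only illustrates how fees compound with distance.

def remaining_after_hops(amount: float, fee_rate: float, hops: int) -> float:
    """Value left after paying a proportional routing fee at every hop."""
    return amount * (1 - fee_rate) ** hops

amount, fee_rate = 100.0, 0.005
for hops in (1, 10, 100, 500):
    left = remaining_after_hops(amount, fee_rate, hops)
    print(f"{hops:>3} hops: {left:6.2f} remaining ({100 - left:.1f}% lost to fees)")
```

Past some distance the fees consume the payment itself, so each user can in practice reach only a bounded neighborhood of the shard space, whatever its nominal size.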
Economically Optimized Scaling
Now I turn to the Lightning Network (LN), which is normally discussed as a “layer 2” scaling solution for Bitcoin. In fact, LN can be thought of as a form of sharding! In essence, instead of establishing “rigid” shards with boundaries and defined sets of nodes and information-to-verify, users choose for themselves which other economic actors they wish to interact with frequently, and each pair of users then monitors their specific “channel” on the Lightning Network. The “shard space” established within the Lightning Network grows organically according to the most efficient possible use of resources. Rather than forcing nodes to interact with neighbors with whom they may rarely conduct business (as occurs in rigid layer 1 sharding solutions), users are free to open channels with anyone they wish, according to their economic desires. This form of sharding has the added benefit of being done “away” from the global consensus layer, thus allowing network participants to never lose their global view of the network. Unlike layer 1 sharding solutions, which compromise the global view and thus introduce risk, LNP/BP (Lightning Network Protocol/Bitcoin Protocol) users can always verify that the layer 1 Bitcoin consensus rules are being followed as expected. Global economic reach is always ensured through layer 1 interaction.
So just as layer 1 sharding solutions have travel fees, so too does LN have routing fees. Now we have a layer 1 block space market, and we have an LN routing market. It is useful to explore the interplay between these two markets. In LNP/BP, if one is moving a large block of funds, one may choose to do so on-chain. But as the blocks become crowded and fees rise, one may choose to move more and more of one’s economic activity to LN, thus reducing fee pressure on the base layer. Consequently, as demand for scarce layer 1 block space decreases, so too do the fees required to access it. You end up with a natural equilibrium in which fees ebb and flow between the two markets according to each user’s determination of the most efficient use of their resources.
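This equilibrium can be sketched as the simple cost comparison each user performs before every payment. All fee figures here are placeholder assumptions, not live market data:

```python
# Sketch of the per-payment choice that arbitrages the two fee markets.
# All fee figures are placeholder assumptions, not live market data.

def cheaper_venue(onchain_fee_sats: int, ln_fee_ppm: int, amount_sats: int) -> str:
    """Pick the venue with the lower total fee for this payment."""
    ln_fee = amount_sats * ln_fee_ppm / 1_000_000  # proportional routing fee
    return "on-chain" if onchain_fee_sats < ln_fee else "Lightning"

for amount in (10_000, 1_000_000, 500_000_000):
    venue = cheaper_venue(onchain_fee_sats=5_000, ln_fee_ppm=100, amount_sats=amount)
    print(f"{amount:>11,} sats -> {venue}")
```

When many users make this same comparison, demand flows toward whichever layer is momentarily cheaper, which is exactly the ebb and flow between the two markets described above.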
Enforcing Speed Limits
So now we move on to our final “back to basics” issue pertinent to the present discussion. We have already shown that meaningfully scaling layer 1 is neither desirable nor possible, and that Bitcoin’s design choices are sufficient. But what about Satoshi’s chosen Sybil protection mechanism, Proof of Work? First: what is a Sybil protection mechanism? A Sybil protection mechanism is the thing that actually ensures the speed limit is followed!
There has to be some way to actually limit the amount of information being passed around the network, or the speed limit is meaningless. Here, Satoshi was presented with a problem familiar to many in his circle of contemporaries. Previous (and also subsequent) attempts to provide Sybil protection all revolved around trusted identities (e.g. David Chaum’s ecash). We might simply choose one person to collect all the transaction information from the users who wish to transact. That person then decides which smaller set of information should be sent back out to all the users for verification. Of course, such a process isn’t decentralized at all. You may think, “Well, we just have to expand the set of signers so there isn’t a single point of failure.” This may sound like an improvement, but again, the solution is ultimately a trusted one, and therefore it is not censor resistant.
The reason such a solution is trusted is that it requires identities, which must be built into the network itself. This could take the form of a hard-coded list of public keys, for example. These identities signing the blocks exist inside the network, in a sense — and given their limited physical number, it is not hard to see that censorship is a real risk. A censor could take over the network identity of a signer, and users would be none the wiser.
The key to understanding Sybil protection mechanisms is that in order to limit who is able to write to the ledger, they require the “ledger writer selection process” to be tied to some scarce resource. In Proof of Stake, this scarce resource is identity, embodied in the staked token. In Proof of Work, it is energy. Without a link to a scarce resource, there is simply no way to achieve Sybil protection.
So Satoshi’s solution to creating a trustless Sybil protection mechanism was to move the selection process wholly outside of the network. By selection process, I am referring to the act of selecting which information will be forwarded to all nodes for verification. Another way to think about this would be to say that Satoshi eliminated identity altogether from Bitcoin’s Sybil protection mechanism.
The true genius of Proof of Work lies in tying the selection process to energy, itself a scarce resource, but one that exists wholly outside of the crypto network. Through the Proof of Work requirement, the mining process is “black boxed.” Proof of Work ensures that anyone who has computational power has equal access to serving as what we might refer to as a “ledger writer,” or someone who is able to look at the large set of information being passed around the network (as stored in the mempool, in Bitcoin parlance), decide how to parse it down (normally by choosing the transactions with the highest fees), and then pass that smaller set of information (the block) around the network for verification.
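The mechanism itself is remarkably small. Here is a minimal sketch of the hash lottery; it is simplified (real Bitcoin grinds an 80-byte block header against a compact-encoded difficulty target), but the principle is identical:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Grind nonces until the double SHA-256 of the block falls below the target.

    Every attempt costs real energy outside the network; anyone with hash
    power may play, and no identity appears anywhere in the process.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example block", difficulty_bits=18)  # ~260k hashes on average
print(f"found valid nonce: {nonce}")
```

Verification takes a single hash, so every node can cheaply confirm that the ledger writer really paid the energy cost. That asymmetry is what makes the selection process trustless.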
Staking and the Centralization of Power
So what’s the problem with Proof of Stake in current, well-known implementations? There are many problems, of course. Andrew Poelstra describes the fundamental “nothing at stake” problem very well. But for the purposes of the present discussion, I want to focus on the economics of staking and its impact on the centralization of power over time.
In essence, what contemporary PoS solutions have done is to take the “trusted identity” aspect of all staking systems and distribute it over the token holders themselves. In theory this sounds nice, since there may be thousands of token holders who hold “stake” and take part in “ledger writing.” But in practice, here’s the rub: unlike PoW, which is open to all who have computational power (waste energy is abundant around the world), PoS is a closed system, open only to those with tokens. And since the more tokens you own, the more rewards you get, the system is centralizing.
Technically, staking systems need not be centralizing, but only in the specific case that they permit perpetual inflation. With inflation, new coins could be distributed to identities that we may hope are not controlled by existing stakers. (But that hope is clearly problematic in and of itself.) If we introduce a token hard cap, however, the fact of centralization becomes very clear: existing stakers reap rewards proportional to their stake, meaning that the larger one’s stake, the larger one’s rewards as a proportion of the total, a situation which leads ultimately to oligopolistic or monopolistic control of all the stake and rewards. So without inflation, there is an ever-present pressure toward centralization. Moreover, the oligopoly or single staker capturing this flow of rewards would set the Cantillon effect into motion.
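A deliberately crude toy model makes this pressure visible. It assumes a fixed supply in which transacting holders pay fees that a staking oligopoly collects pro rata and compounds; all numbers are invented for illustration:

```python
# Toy model under a hard cap: all figures are invented for illustration.
# Assumption: stakers compound every reward, while ordinary holders (who
# need liquidity to transact) pay fees and do not stake.

stakers, holders = 0.20, 0.80  # initial shares of a fixed total supply
fee_rate_per_epoch = 0.002     # fees recycled to stakers each epoch

for epoch in range(1, 501):
    fees = holders * fee_rate_per_epoch  # paid by transacting holders
    holders -= fees
    stakers += fees                      # pro rata to stake: stakers take all
    if epoch % 100 == 0:
        print(f"epoch {epoch}: stakers hold {stakers:.1%} of supply")
```

Under these assumptions the staking class drifts from 20% of the supply to roughly 70% within 500 epochs, without the stakers doing anything but collecting rewards.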
Some alt projects have devised fancy mechanisms that propose to mitigate this problem. Usually these revolve around voting for various ledger writers to avoid giving all the rewards to those “at the top.” But in the end, such mechanisms only obfuscate the root problem and at best delay the inevitable. Splitting and pooling are two strategies stakers can use to circumvent attempts to control how many rewards a given staker can receive as a proportion of the total. Splitting is when a single user divides his stake across multiple network identities, and pooling is when multiple users band together and pool their resources to reap additional staking rewards. These strategies effectively render all staking reward schemes linear in nature (see the sketch below), and show why stake centralizes over time. Again, this is why Proof of Work is necessary for censor resistance — it eliminates identities from the network altogether — thus turning Sybil protection into a game whose Pareto outcome is determined by shared incentives rather than opposing ones.
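To see why such reward caps fail, here is a toy example of splitting. The 1% reward rate and per-identity cap are invented purely for illustration:

```python
# Toy model: a scheme that caps the reward any single identity can earn.
# The reward rate, cap, and stake size are invented for illustration.

REWARD_CAP_PER_IDENTITY = 10.0

def reward(stake: float) -> float:
    """Sub-linear scheme: pays 1% of stake, capped per identity."""
    return min(stake * 0.01, REWARD_CAP_PER_IDENTITY)

whale_stake = 10_000_000.0
print(f"one identity: {reward(whale_stake):,.0f}")  # capped at 10

# Splitting: the same whale divides the stake across many identities.
n = 10_000
print(f"{n:,} identities: {n * reward(whale_stake / n):,.0f}")  # cap defeated
```

Because identities are free to create inside the network, the per-identity cap is meaningless: the whale’s total reward is linear in stake either way, which is exactly the root problem PoW avoids by removing in-network identity altogether.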
Conclusion
So there you have it. In this article, I have laid out the theoretical underpinnings of Bitcoin’s censor resistance from a networking perspective. Satoshi’s decision to keep Bitcoin slow, the identity-less Sybil protection mechanism of Proof of Work, and a robust set of social incentives together support Bitcoin’s censor resistance. There are certainly many more facets to the design and functioning of crypto networks than have been discussed here, but my purpose in this article was to drill down into the most essential aspects of what makes Bitcoin tick from a networking perspective. I hope this discussion has given readers a clearer understanding of why no competitor has any hope of supplanting Bitcoin as the dominant crypto network. Yes, Bitcoin’s base layer is slow, but it is slow for a reason!
Please note that I have focused only on the networking side of Bitcoin, and have not touched on the monetary theory of why Bitcoin is a good money, nor the impact of its monetary network effect on competing monies and tokens. Enough has probably been said about that by others — but suffice it to say that the “social side” of Bitcoin adoption is also very interesting and an essential element of Bitcoin’s success story.
I have also not discussed the economics around dominant and minority PoW chains, a question which may have been on some readers’ minds. Briefly, my comment is that because Bitcoin has the monetary network effect, it will garner a greater share of the total PoW economy over time. Other PoW based crypto networks may garner some hash power in the short term due to hype cycles, but without the monetary network effect (money is a winner-take-most game) they are vulnerable to the long term risk of having inadequate hash power to maintain network security.
I also believe that channel factories are an essential technical element for scaling the Lightning Network, a development which is sure to come in time.
In summary, no crypto network can make a credible claim to have improved Bitcoin’s design in any meaningful or fundamental way. There is no set of tradeoffs that could provide adequate censor resistance while also scaling layer 1 in any meaningful way. At root, no Sybil solution that relies on identity can promise a hard cap coin supply without giving up decentralization over the long run. In conclusion, we can see that Satoshi made suitable and acceptable choices in his initial design. No one would argue they were perfect or ideal, but they were good enough to get Bitcoin bootstrapped and bring it to where it is today.
Onward toward our beautiful Bitcoin future!
Thanks to Max Hillebrand for his helpful comments, which helped refine some of the arguments presented in this article.