Proof of Work (PoW) vs. Proof of Stake (PoS): Sharding Edition

HodlX Guest Post

Scalability is currently the major constraint on the mass adoption of blockchain technology. In the standard P2P blockchain design introduced by Satoshi Nakamoto, each node must process all of the data in the network.

However, nodes in the network often have very different capabilities. In Nakamoto’s standard design, network performance is therefore limited by the performance of the weakest full nodes in the network.

A naive approach to scaling up a blockchain network is to restrict participation for weak nodes. In this case, the network relies only on powerful nodes with fast, high-bandwidth connections that can handle large amounts of data.

However, such a network is inevitably more centralized, as powerful nodes are more expensive to maintain. Economies of scale are thus achieved at the expense of decentralization, widely regarded as the most valuable feature of blockchain networks.

Researchers around the world have made several proposals to solve the scalability problem, of which sharding is considered the most promising. Still, there is no common vision of how to implement sharding and strike an acceptable compromise among the network’s many parameters.

Projects such as Ethereum 2.0, Algorand, Cardano, Near and Zilliqa have developed their own sharding-based blockchain designs. However, all of these projects follow a similar pattern - they rely on a proof-of-stake (PoS) consensus algorithm and pseudo-random selection of validators for shard committees.

To participate in block validation under the PoS sharding approach, each participant locks up a certain number of coins as a stake. For example, in Ethereum 2.0, one stake of at least 32 coins equals one vote during a block validation round.

It is important to note that each participant can lock multiple stakes and thereby collect multiple votes. A user who has locked a certain number of stakes can thus be elected, through the pseudo-random committee selection mechanism, as a validator on up to that many different shards.
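As a toy illustration of this bookkeeping, here is a minimal Python sketch. The 32-coin figure comes from the Ethereum 2.0 example above; all names are mine, not any real client’s API.

```python
# Toy bookkeeping sketch - how a coin balance maps to stakes and votes
# under the "one stake of at least 32 coins = 1 vote" rule described
# above. Names are illustrative only, not any real client's API.

STAKE_SIZE = 32  # minimum stake, as in the Ethereum 2.0 example

def votes_from_balance(balance: int) -> int:
    """Number of stakes (and therefore votes) a balance can back."""
    return balance // STAKE_SIZE

print(votes_from_balance(32))    # 1 vote
print(votes_from_balance(1408))  # 44 votes, all held by a single entity
```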

Some advocates of PoS sharding conflate the concept of a ‘stake’ with that of a ‘validator.’ Many readers have surely seen catchy headlines claiming that some coin X’s testnet has attracted over 20,000 ‘validators.’

However, this figure is not the number of participants - it is the number of stakes. It is impossible to know who placed those stakes. There could be a thousand stakeholders, or only a hundred. It is also possible that the majority of stakes are controlled by a single entity, in which case the network is clearly centralized.

Therefore, labeling each of that single entity’s stakes as a separate validator is not only confusing and misleading, but arguably malicious.

Our approach is to distinguish participants from their stakes. Let’s do some calculations for illustration. Assume the network has D different shards and a given participant holds S stakes.

Then the probability that an ideal pseudo-random function selects this participant as a validator (with one or more votes) in a given shard is

P = 1 − (1 − 1/D)^S
This probability is also the mathematical expectation of the indicator function that equals 1 if the participant is a validator of the shard and 0 otherwise. The sum of these indicator functions over all shards is the number of shards validated by the participant.

The mathematical expectation of the number of shards validated by a participant is thus given by the formula:

E = D × (1 − (1 − 1/D)^S)

For example, in the Ethereum 2.0 testnet, the shard count is D = 64. According to the formula, a participant who locks S = 44 stakes validates an average of 32 shards:

E = 64 × (1 − (63/64)^44) ≈ 32
This means the participant will validate an average of 32 shards and will therefore download and process half of the data in the network. It could be argued that half is not the whole. But PoS sharding was advertised as a major breakthrough that would ease the load on weak nodes in the system.

However, halving the load is not much of an improvement - such participants still carry a large workload to maintain the system, so weak nodes will not see the expected performance gain.
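For readers who want to check these numbers, here is a minimal Python sketch of the expectation formula above (the function name is mine):

```python
# Minimal sketch of the expectation derived above - a participant with
# S stakes, each assigned uniformly at random to one of D shards,
# validates on average E = D * (1 - (1 - 1/D)**S) distinct shards.

def expected_shards(D: int, S: int) -> float:
    """Expected number of distinct shards validated by S stakes."""
    return D * (1 - (1 - 1 / D) ** S)

print(expected_shards(64, 44))  # ~31.99 - half of a 64-shard network
```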

It could be argued that it is not necessary to lock 44 stakes - a participant with limited resources can lock one or two stakes and process one or two shards. Unfortunately, PoS sharding designs assume that shard committees are reshuffled every epoch to prevent attacks by adaptive adversaries.

Adaptive adversaries take down targeted nodes, for example through DDoS or eclipse attacks. A node knocked offline is fined, loses part of its stake, and drops out of the committee. Ultimately, a malicious actor could take control of the entire committee.

In a PoW system, on the other hand, a node can resume work immediately after an attack.

That is why committee shuffling is an essential part of PoS sharding. After each reshuffle, the committees are re-elected and participants are assigned as validators to other shards.

Unfortunately, a single-stake participant must then download the state of the newly assigned shard to verify transactions and perform validation duties honestly. This is a considerable amount of traffic.

Participants must know all unspent transactions or all account balances to continue their work. The alternative is to lose the stake, or to become a puppet of other nodes that do have the necessary data.

Let’s do some more calculations. Suppose each stake is locked for approximately 180 days and is elected to a committee once a day. The formula above works in this case too, with D = 64 and S = 180:

E = 64 × (1 − (63/64)^180) ≈ 60

On average, this participant will download the state of 60 of the 64 shards over the lock-up period. That is almost the entire network. Here is another example - suppose a participant locks 4 stakes. Then after just 11 days (4 × 11 = 44 assignments) they will have downloaded nearly 32 shards, or half of the network’s state.
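The formula can also be sanity-checked empirically. Here is a small Monte Carlo sketch, assuming an idealized uniform pseudo-random assignment of one stake per day, as in the examples above:

```python
# Monte Carlo sanity check of the formula under daily reshuffling - one
# stake reassigned to a uniformly random shard once a day for 180 days
# behaves like S = 180 independent assignments over D = 64 shards.

import random

def shards_touched(D: int, assignments: int, trials: int = 10_000) -> float:
    """Average number of distinct shards hit by random assignments."""
    total = 0
    for _ in range(trials):
        total += len({random.randrange(D) for _ in range(assignments)})
    return total / trials

print(shards_touched(64, 180))         # ~60.2 of 64 shards
print(shards_touched(64, 44))          # 4 stakes over 11 days: ~32
print(64 * (1 - (1 - 1 / 64) ** 180))  # closed form: ~60.24
```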

So far, we have considered the burden on small stakeholders. The other side of the coin is the rich stakeholder with many stakes. Imagine a server with 64 processing units validating all 64 shards, each unit verifying its own shard. Managing this server is a fairly simple task.

Whenever the committees are reshuffled, there is no need to download or update any shard state. It is only necessary to reassign the staking keys among the processing units according to the committee election results.

Thus, an operation that is costly for small stakeholders is relatively cheap for a large stakeholder running these 64 processing units on one server. The attentive reader will recognize that this server is simply a full node. Under this design, those who can afford to run a full node save far more on network traffic.
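A back-of-the-envelope comparison illustrates the asymmetry. The 10 GB shard-state size below is an invented figure for illustration only, not a measured value:

```python
# Back-of-the-envelope traffic comparison over one 180-day lock-up.
# SHARD_STATE_GB is an invented illustrative figure, not a measurement.

SHARD_STATE_GB = 10  # assumed size of one shard's state
RESHUFFLES = 180     # daily committee reshuffles, as in the example above

# Worst case, a single-stake participant downloads a freshly assigned
# shard's state after every reshuffle.
small_staker_gb = RESHUFFLES * SHARD_STATE_GB

# A full node already stores every shard's state and merely remaps its
# staking keys between processing units - no extra state downloads.
full_node_extra_gb = 0

print(small_staker_gb)     # 1800 GB of state traffic
print(full_node_extra_gb)  # 0 GB
```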

It could be argued that 60 is less than 64 and that half of the state is not the entire state. Still, this is hardly the long-awaited solution worth “a billion-dollar budget and 10 years of development.”

Either way, small stakeholders with weak nodes must handle a huge amount of data or a huge amount of network traffic. This requirement defeats the very purpose of sharding as a scaling technique.

Different projects implementing proof-of-stake sharding may choose different shard counts, committee reshuffling intervals, and stake lock-up periods.

However, for every practical set of parameters, one can observe “performance below expectations.” Whenever such projects face “launch delays,” core teams often present them as engineering issues. As I have just shown, however, they are design flaws inherent in PoS sharding.

Interestingly, sharding never had to be based on proof of stake to reduce the workload of small participants. Suppose instead that a project builds sharding on proof of work.

Unlike PoS designs, it offers weak nodes a setting in which they can manage their own workload. All participants are then rewarded fairly for their network maintenance efforts, and as a result, weak nodes remain profitable.

Another advantage of PoW sharding is the absence of typical PoS problems, such as nothing-at-stake and stake pooling. As a result, proof of work offers a better scaling trade-off than proof of stake.

Featured Image: Shutterstock/wirow
