Sharding breaks a large database up into many smaller databases so that each node processes only a part of the ledger rather than the whole thing. Many DLTs use sharding to increase the speed and scalability of the network, since processing 1/100th of a ledger is far less intensive than processing it in its entirety. The network benefits because:
- It increases throughput: different nodes can process different transactions, so more transactions can be processed at any given time (which is necessary to meet growing demand from users, even at this early stage of DLT adoption)
- It lowers the resource cost of running a node, so more nodes can participate and we are less likely to end up with a centralized system in which users must rely on a small set of ‘full’ nodes, the only ones able to afford to store the full ledger.
However, sharding comes with drawbacks and must be implemented in a way that does not open new attack vectors or compromise the integrity of the ledger. A shard with only 1/100th of the network's total hash power devoted to it requires only 1/100th of that hash power to attack, increasing the chances of a successful 51% attack. Networks must avoid such issues while steering clear of centralized solutions such as master nodes or coordinating authorities.
How and why Radix shards
Sharding from Day 1 to meet future demand
Most current sharding implementations start with a single vertical structure (e.g. a blockchain) and shard it as the network grows. They do so to make validation less resource intensive, which effectively adds processing power: the network goes from one whole, to two halves, to four quarters, and so on.
Splitting the ledger up like this requires a method for handling transactions between shards; sharding alone does not let nodes process different shards in parallel, which limits the speeds that can be reached. To process transactions in parallel without risking double spends, nodes must either hold an up-to-date version of the global state of the ledger (which negates the scalability gains and increases the resources required to run a node) or rely on a central authority that dictates to nodes which transactions conflict.
Radix, however, is fully sharded from the beginning. It will launch with the maximum number of shards, 18.4 quintillion, already in existence, so that it can meet the scale a globally used platform will eventually need. Each shard would be able to process 2,000 transactions per second, for a total hypothetical throughput of 36.8 sextillion tps at full capacity. Each node maintains as many shards as it can, dropping any shard that is too heavy for it, allowing even low-powered devices to participate. To ensure high availability, the system pulls nodes into underserved shards.
If each shard were to hold just 0.5 bytes, the network could store what Randall Munroe estimated in 2014 to be the entirety of Google's data: 10 exabytes. In reality, of course, we would not need to store anywhere near this amount. Google stores vast quantities of video and other storage-intensive media files, whereas Radix will store simpler text-based information such as transaction histories.
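To make those figures concrete, here is a quick back-of-the-envelope check in Python (the per-shard throughput and bytes-per-shard numbers are the ones quoted above, not measured values):

```python
# Back-of-the-envelope arithmetic for the figures above (illustrative only).
SHARDS = 2 ** 64          # ~18.4 quintillion shards (18,446,744,073,709,551,616)
TPS_PER_SHARD = 2_000     # hypothetical per-shard throughput quoted above

total_tps = SHARDS * TPS_PER_SHARD
print(f"theoretical max throughput: {total_tps:.3e} tps")
# ~3.69e+22, i.e. the ~36.8 sextillion tps quoted above
# (using the rounded 18.4 quintillion shard count)

BYTES_PER_SHARD = 0.5     # the 0.5-byte-per-shard thought experiment
total_bytes = SHARDS * BYTES_PER_SHARD
print(f"storage at 0.5 B/shard: {total_bytes / 1e18:.1f} exabytes")
# ~9.2 EB, roughly Munroe's 10 EB estimate for Google's data
```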
However, this level of sharding from the start ensures the network can meet both the future throughput and storage needs of a globally scaled solution without resorting to approaches likely to increase centralization.
Deterministic data
Instead of starting with a single vertical structure, Radix uses a horizontal data architecture that can be easily indexed.
Why does this indexing matter? Well, imagine you have a very large book collection, and every time you buy a new book you have to reindex the entire shelf because something has changed. Given the amount of data DLTs process, however, a more apt analogy is having to store every book ever created while also knowing where every word in each of those books sits. And every time a new book was added, every word of every book would be reorganized, yet you would have to (instantly) keep track of the new order.
This would be time consuming, to say the least. Now picture instead that there was a pre-existing slot for every word of every book, so that when you bought a new book you would already know exactly where each word was going to go, with all your other words remaining in the same place. Wouldn't that be much simpler to keep track of?
This is essentially what Radix achieves with a pre-sharded structure. Because the data architecture is split into shards from the start, we know in advance how the network will be cut up and indexed. Over time, as transactions and activity occur, new data occupies new parts of shard space. By using key reference fields (such as public keys), we can always find a given piece of data and know where it will reside in perpetuity. Nodes are therefore not forced to update their entire ledger every time a new transaction is made; they simply add the newest transaction.
This is achieved through a deterministic process: one that always outputs the same answer given the same set of inputs. This matters because it means we are working with a data structure that can scale linearly.
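As a rough illustration of what such a deterministic mapping could look like (a minimal sketch, not Radix's actual scheme; the choice of hash function here is an assumption):

```python
import hashlib

SHARD_COUNT = 2 ** 64  # the full, pre-existing shard space

def shard_for(public_key: bytes) -> int:
    """Deterministically map a key reference field (here a public key)
    to a shard index in [0, SHARD_COUNT). The same input always yields
    the same shard, so any node can locate the data without consulting
    a global index, and adding new data never forces a re-index."""
    digest = hashlib.sha256(public_key).digest()
    return int.from_bytes(digest[:8], "big")  # 8 bytes -> a 64-bit shard index

# Hypothetical key bytes, purely for illustration: the mapping never changes.
alice = bytes.fromhex("02a1b2c3d4")
assert shard_for(alice) == shard_for(alice)
```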
Grouping transactions to increase throughput
Another problem relates to transaction throughput. No matter how sharded the network, if transactions must be processed together, we lose the ability to process them in parallel.
Radix addresses this through how it groups transactions. Your public key determines the shard your wallet lives on, so you can only spend from that shard. If I send to Bob and to Alice from the same wallet, both transactions come from the same shard and are verified against each other to make sure I have not double spent. This makes dishonest actors easy to spot, since double spends can only ever occur within a single shard.
In the same vein, the data architecture ensures that two transactions on two different shards are in no way related. If I send to Alice and Bob from two different accounts, the transactions cannot be related, so there is no need to check that they don't conflict: they are already provably unrelated. Unrelated transactions can therefore be processed asynchronously, meaning the 18.4 quintillion shards can all process transactions simultaneously.
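To illustrate the idea (a simplified sketch, not Radix's actual consensus code), the following Python groups transactions by the sender's shard, checks conflicts only within a shard, and validates different shards in parallel. The transaction fields `sender` and `spend_id` are hypothetical stand-ins for a sender's public key and a conflict key:

```python
import hashlib
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def shard_for(public_key: bytes) -> int:
    # Same deterministic key-to-shard mapping as the earlier sketch.
    return int.from_bytes(hashlib.sha256(public_key).digest()[:8], "big")

def validate_shard(txs: list) -> list:
    """Conflict checks only ever happen inside one shard: reject any
    second spend of the same funds within this shard's group."""
    seen, valid = set(), []
    for tx in txs:
        if tx["spend_id"] not in seen:  # 'spend_id' is a stand-in conflict key
            seen.add(tx["spend_id"])
            valid.append(tx)
    return valid

def process(transactions: list) -> list:
    # Group each transaction by its sender's shard. Groups on different
    # shards are provably unrelated, so they can be validated in parallel.
    groups = defaultdict(list)
    for tx in transactions:
        groups[shard_for(tx["sender"])].append(tx)
    with ThreadPoolExecutor() as pool:
        results = pool.map(validate_shard, groups.values())
    return [tx for shard_txs in results for tx in shard_txs]
```

Because a wallet can only spend from its own shard, every potential double spend lands in the same group, while everything else proceeds independently.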