Every other week, Radix DLT Founder Dan Hughes hosts a technical Ask Me Anything (AMA) session on the main Radix Telegram Channel. Here members of the community have a chance to get insights from Dan himself on Cerberus, Cassandra, and other key technical innovations of Radix.
Thank you to everyone who submitted questions and joined our AMA. You can read the full transcript below from the session on May 11th 2021.
I think this has been asked before, but I can't find an answer in detail: Dan, what happens if you die tomorrow? I mean: what's the realistic impact of you not working at Radix DLT anymore? Can the team continue "with just one person less", would that move "Lambo for everybody by 2023" to "Lambo for everybody not before 2030" or "all XRD in existence is not enough to even buy a Lambo rearview mirror ever again"?
Ok, so basically I'm working on Xi'an atm ... that is in part what Cassandra is for, to blaze the trail and define what needs to be done.
Everything I'm doing on Cassandra is documented, source code is online, I've had many conversations, etc.
If I was to die tomorrow (hopefully not) there is enough in Cassandra to provide a decent blueprint and the team is smart enough to fill in the gaps where I didn't get to yet.
You really shouldn't be worried ... but please at least be a little sad if that were to happen.
How does Radix differ from Internet Computer Protocol (Dfinity)?
Watch this space ... or more so my Twitch later.
(Full stream answering this question is now live on Dan’s Twitch Here)
On the usage of shards with state transitions. How does Cerberus deal with the risk of double usage of a shard with massive (independent) parallelization of processing of transactions? I know that the shard space is insanely large to reduce this possibility.
Does each validator set reserve shards that can be used to store a new state in?
If they have a reserved piece of the shard space to deal with state transitions then overlaps will not happen. Is this reserved space (if there is one) fixed or is it dynamic?
Example: Validator set 1 (vs1) services shard group1 (sg1)
vs2 services sg2
State transitions occur in a shard of sg1 and, in parallel, in a shard of sg2
Both new states are stored in unused shards that were already under the wings of the respective shard groups, right?
Where the state is stored is defined by the state itself as it's wholly deterministic in nature.
If there is an execution of some state output, then the hash of that state output determines what shard it is in and which shard group is responsible for it.
If validators were to "reserve" shards for things, then you also need an index of where things live ... which adds lots of bloat and complexity.
The number of available shards is such a large number it's almost impossible to quantify or visualize ... basically you don't need to worry about overlap or reuse, and in the event that it does ever happen the solution is to simply fail the transaction.
* If we were to take all the matter in the universe and create hard drives out of it (maybe leaving a few stars to power them) ... you would not be able to store all 2^256 numbers on those drives.
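(For illustration, here is a minimal sketch of that deterministic placement, assuming SHA-256 and a made-up number of shard groups; the actual Cerberus mapping from shard space to shard group may partition the space differently, e.g. by prefix range.)

```python
import hashlib

SHARD_GROUPS = 8  # hypothetical count, purely for this sketch

def shard_of(state_output: bytes) -> int:
    """The 256-bit hash of a state output *is* its position in the shard space."""
    return int.from_bytes(hashlib.sha256(state_output).digest(), "big")

def shard_group_of(state_output: bytes, groups: int = SHARD_GROUPS) -> int:
    """Which validator group serves that shard; a simple modulo is used here only for illustration."""
    return shard_of(state_output) % groups

# Any node computes the same answer from the same state -- no index or reservation needed.
output = b"substate: 10 XRD -> some owner"
print(hex(shard_of(output)))
print(shard_group_of(output))
```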
I think it is true that you will always need some maximum number of validators per shard group. Eventually, this maximum may be more than 100, but some limit will always be required (if only to reduce the storage/CPU requirements for the once per epoch software).
Do you agree? Or does your hybrid combinatorial POW being tested in Cassandra make possible an unlimited number of validators per shard group?
You can never have an unlimited number of validators, as there is always some overhead. Even the most efficient gossip or compact signature scheme has some overhead component that scales logarithmically.
The desire is to be able to increase the possible number of validators per shard group by at least an order of magnitude with minimal side effects on messaging or compute complexity.
That appears to be the case with the BLS signature implementation and the compact state vote representations I have in Cassandra.
As for the quantity, there's still work to do, but it's looking like it'll land around 1-3k before those logarithmic overheads become detrimental.
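(A rough sketch of why aggregation helps, using illustrative sizes -- 64 bytes for an individual signature, 96 bytes for an aggregated BLS signature plus a who-signed bitfield -- rather than Cassandra's actual vote encoding.)

```python
import math

def naive_vote_bytes(n_validators: int, sig_bytes: int = 64) -> int:
    """One signature per validator: grows linearly with the validator set."""
    return n_validators * sig_bytes

def aggregated_vote_bytes(n_validators: int, agg_sig_bytes: int = 96) -> int:
    """One aggregated BLS signature plus a bitfield recording who signed."""
    return agg_sig_bytes + math.ceil(n_validators / 8)

for n in (100, 1_000, 3_000):
    print(f"{n} validators: naive={naive_vote_bytes(n)} B, aggregated={aggregated_vote_bytes(n)} B")
```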
On collecting dev incentives
In the interview with Scott Melker, Piers stated that the team was banging their heads on the problem of how to collect tx fees across shards
It was decided that the incentives would come from network emission and that tx fees would be 100% burned.
What's the difference between collecting tx fees across shards and collecting dev royalties?
Is it easier to collect dev royalties? How are they different?
I've had MANY a conversation about this with Piers and the team and it is indeed an annoying issue!
Let's do the dev incentives first.
I create a component and deploy it ... the blueprint for that component lives in a particular shard and always will. Any calls to that component always know where to go to call it, and within that component is the wallet address of the creator, which again, stays static.
Therefore it's super easy to add another state execution in the transaction that takes some of the fee and sends it to that wallet address. Because the fee is always inbound, you can do them in parallel too, even though it's technically a write operation.
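(A hypothetical sketch of that extra royalty execution; the operation names, royalty rate, and addresses are invented for illustration and are not the actual engine API.)

```python
from dataclasses import dataclass

@dataclass
class ComponentBlueprint:
    shard: int           # the blueprint lives in this shard and always will
    creator_wallet: str  # static creator address embedded in the blueprint

def executions_for_call(blueprint: ComponentBlueprint, fee: float, royalty_rate: float = 0.10):
    """Append one extra state execution crediting part of the fee to the creator.
    Because the royalty is always an inbound credit, it can be applied in parallel."""
    royalty = fee * royalty_rate
    return [
        {"op": "execute_component", "shard": blueprint.shard, "fee": fee - royalty},
        {"op": "credit", "to": blueprint.creator_wallet, "amount": royalty},
    ]

blueprint = ComponentBlueprint(shard=42, creator_wallet="creator-wallet-address")
print(executions_for_call(blueprint, fee=1.0))
```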
For the network incentives, it's a different story.
Transactions are happening in all shards, so in the crudest sense you have to keep a tally of how much fee is spent. Then, after some duration, you take that tally and split it between all the validators in the network serving the shards.
The problem is that there are many inputs and outputs that no one will have a full view of (it's sharded) ... from S1 I can't see whether the validators in S0 have tallied correctly unless I can see all of S0. Fees and distribution become a global state problem, and you don't want any of that in a sharded environment.
You could say ... well just look at the fees for a shard group and split them between the validators in that shard. Sure, that works great if all the shards are equally balanced, but if a developer deploys a component that is super popular, some validators in some shard group are going to be involved in more transactions and collect more fees. Other validators in other shards may try to forcibly exit their shard group and enter that one to capitalize on the increased fees, leaving the other shard groups less secure.
Too many ifs, buts ... just burn the fees and have a network-wide incentives emission based on stake ... problem solved.
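(A minimal sketch of the "burn fees, emit on stake" model with made-up numbers: each validator's reward depends only on its stake, so no cross-shard fee accounting is ever needed.)

```python
def epoch_rewards(stakes, epoch_emission):
    """Tx fees are burned; rewards are just each validator's share of total stake
    times a fixed network-wide emission."""
    total_stake = sum(stakes.values())
    return {v: epoch_emission * s / total_stake for v, s in stakes.items()}

# Hypothetical stakes and emission, purely for illustration.
print(epoch_rewards({"validator-a": 1_000_000, "validator-b": 3_000_000}, epoch_emission=75_000))
```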
Here is a tweak that should be implemented for Olympia, since it will be more complicated to implement it later. Let the number of shards change from 2^256 to 2^256 - 2^248. This is a bit of future-proofing that allows for the (conceivable but admittedly unlikely) future need for either more shard space or for some specialized shard-like data structures.
The tweak is a few lines of code:
If a deterministically generated shard address has a first byte of 0b11111111, then keep trying until the generated shard address does not have this property.
This has the effect of excluding a submicroscopic fraction of the shard space from being used for "real" shard addresses, reserving it for a future need that may never occur. If such a need is ever identified, this guarantees that no existing data is already stored in the reserved space.
Aside from me not seeing the net benefit of this idea, it's also impractical.
If I have some state that is in the UP state for 10 XRD ... the DOWN state for that 10 XRD when I spend it is a product of the hash of the UP state.
I can't just "try again" to get a shard space address that I would like, because doing so would violate the state transition model ... additional information would then be needed in the DOWN state to say how many times I retried from the deterministic seed of the UP input.
Plus ... state hashes are then malleable ... I can create a DOWN state that "lives" in S0 with 10 attempts ... and a DOWN state that "lives" in S1 with 20 attempts ... that probably breaks safety really fast.
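(A toy illustration of why a retry counter makes the DOWN shard malleable, assuming a simple SHA-256 derivation as a stand-in for the real state-hashing scheme.)

```python
import hashlib

def shard_of(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

up_state = b"UP: 10 XRD owned by alice"

# Deterministic model: the DOWN substate's shard is a pure function of the UP state.
print(hex(shard_of(b"DOWN:" + hashlib.sha256(up_state).digest()))[:12])

# With a retry counter (as the proposed tweak would require), the spender can grind
# for a shard they prefer -- the same UP input now maps to many possible DOWN shards.
for attempts in range(3):
    ground = shard_of(b"DOWN:" + hashlib.sha256(up_state).digest() + attempts.to_bytes(4, "big"))
    print(attempts, hex(ground)[:12])
```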
On the importance of different node clients.
What's the team's position on the role of different clients?
Any time scale when it would be possible/optimal for these to enter the game?
I'm in the middle of the road between Satoshi's and Vitalik's arguments.
I think multiple clients can be bad at the wrong time and good at the right time. How you decide the good and bad timings though I don't really know ... thoughts anyone?
I just stumbled across https://lwn.net/Articles/525459/ and am asking for reassurance that you are aware of the pitfalls of pseudo-random number generators that I had been previously unaware of. For example, the deterministic process for mapping into a shard address must have an important property of "randomness". Specifically, the set of possible values for the result of the shard# calculation must have 2^256 distinct and equally likely values.
The first pseudo-random number generator I ever used took the least significant bits of a timestamp expressed as a U32, squared it, extracted the middle 32 bits of the U64 result, and then repeated the square/extraction 100 times.
The result looked and felt random to me. But it never occurred to me that perhaps only a subset of the 2^32 possible results could happen, and each of those was not NECESSARILY equally likely.
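(For reference, a minimal reconstruction of that kind of middle-square generator with illustrative constants -- it "looks" random, but nothing guarantees full, uniform coverage of the 2^32 possible outputs.)

```python
def middle_square_32(seed: int, rounds: int = 100) -> int:
    """Square a 32-bit value and keep the middle 32 bits of the 64-bit result, repeatedly."""
    x = seed & 0xFFFFFFFF
    for _ in range(rounds):
        x = ((x * x) >> 16) & 0xFFFFFFFF
    return x

print(middle_square_32(123456789))
print(middle_square_32(0))  # degenerate: 0 is a fixed point, so not every seed behaves well
```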
Such a possibility was not crucial to my application back then, but it would be crucial to a pre-sharded DLT like Radix.
So my question is, has your random number generator been peer-reviewed for whether or not it satisfies the necessary "randomness"?
I think there are two types of "randomness" you're referring to here.
In the strict randomness sense, you don't want any randomness at all with determinism ... that's the point of determinism: everyone gets the same output from the same input, and randomness breaks that.
I think you're really referring to entropy, more specifically one-way functions such as SHA-256, as their output "looks random".
Strong one-way hash functions don't use pseudo-random number generators or anything; they rely on the fact that their output is small in size vs what is being input to them, so any output has LOTS of possible inputs.
Think about mod 12 ... if you ask me "how many hours have you worked in the office this week?" and I respond with "4", do I mean 4 hrs? 16 hrs? 28 hrs? 40 hrs? There's no randomness at all, but you have no idea what the correct input behind my answer of 4 was.
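(The same point as a runnable toy: fully deterministic outputs with many possible inputs, and no randomness anywhere.)

```python
import hashlib

# The mod-12 analogy: many different inputs collapse onto one output,
# so the output tells you almost nothing about the input.
hours_worked = [4, 16, 28, 40]
print({h: h % 12 for h in hours_worked})  # every one of these answers "4"

# SHA-256 behaves the same way at scale: deterministic, with an astronomical
# number of possible preimages behind every 256-bit output.
print(hashlib.sha256(b"same input").hexdigest() == hashlib.sha256(b"same input").hexdigest())  # True
```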
In the previous AMA, it was discussed that 1000 nodes will do 1,000,000,000 iterations, of which 1000 will be the best combination, but how much energy does it cost a validator to perform 1 iteration?
This is referring to the CPOW that I'm using in Cassandra
1 iteration depends on how many pending transactions are in the pool ... but generally with 10-20k transactions pending I can perform ~20 iterations per second per core
If the pool is quite empty I can do 1000s per second, but I'll quickly exhaust all the possible combinations I can find and just be rediscovering the same ones over and over, which are not a good enough POW.
I can't remember what pool size that discussion and example referred to btw. So the 10000000 is a bit arbitrary without remembering.
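(Purely a toy illustration of why iteration cost tracks pool size -- this is not Cassandra's actual CPOW, just a stand-in that scores random combinations of a pending pool by hash difficulty.)

```python
import hashlib
import random

def score(candidate: bytes) -> int:
    """Toy difficulty score: leading zero bits of the candidate's hash."""
    digest = int.from_bytes(hashlib.sha256(candidate).digest(), "big")
    return 256 - digest.bit_length()

def one_iteration(pending_pool: list) -> int:
    """One 'iteration': pick a random combination (here, half the pool) and score it.
    Bigger pools mean more data to hash per iteration and far more combinations to explore."""
    subset = random.sample(pending_pool, len(pending_pool) // 2)
    return score(b"".join(subset))

pool = [f"tx-{i:05d}".encode() for i in range(10_000)]
best = max(one_iteration(pool) for _ in range(20))  # ~20 iterations/second/core in the figures above
print("best score after 20 iterations:", best)
```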
How confident are we in Cassandra being linearly scalable and very very fast?
Very ... and the community testing will start to happen soon now that reliability has been increasing steadily and the core foundation is largely free of bugs and glitches!
Why not just do Eth layer 2 scaling solutions?
Because they aren't a general scaling solution. They can do some things REALLY well, and some others REALLY badly.
Cross-dApp communication in L2 is VERY complicated ... doing it atomically is, well, I dunno, no one's solved it IIRC.
If you can't do that, you potentially break most De-Fi use cases, or at best make them very complicated.
I've never been a fan of L2 solutions for anything other than the simplest of operations.
That wraps it up for this session. If you’re interested in finding out more, Dan’s previous AMAs can be found easily by searching the #AMA tag in the Radix Telegram Channel, or by reading the previous blog posts. If you would like to ask Dan a new technical question, just post a message in the chat using the #AMA tag!
To stay up to date or learn more about Radix DLT please follow the links below.
Join the Community:
Twitter: https://twitter.com/radixdlt
Telegram: https://t.me/radix_dlt
Reddit: www.reddit.com/r/Radix/
Discord: https://discord.com/invite/WkB2USt
Radix Resources:
Podcast: https://www.radixdlt.com/podcast
Blog: https://www.radixdlt.com/blog/
YouTube: https://www.youtube.com/c/RadixDLT