Two pieces of nomenclature to clear up first: blockchain the category, and blockchain the structure.

A blockchain is a DAG: Directed (moves in one direction only), Acyclic (you can't return to a node from the current node), Graph. All blockchains (the structure) are DAGs, and a DAG can be a blockchain (the category).

With that out of the way: there are lots of DAGs already; Ethereum even has leaf nodes (uncles). So I don't get the DAG vs blockchain argument.

Also, when looking at a DAG, which implementation do you mean?

You get storage DAGs (like Avalanche, where the UTXOs are stored in a DAG).
You get block DAGs (GHOST / SPECTRE / PHANTOM).
You get consensus DAGs (Fantom / Hashgraph and derivatives, where the output can still be a normal blockchain (structure), or it could be another DAG (structure)).

There are also account-level DAGs (Nano) and transaction-level DAGs (IOTA). So you can't really say DAG vs blockchain.

But what I will infer is the question "Are DAGs more scalable than blockchains?" That answer is entirely dependent on your network graph and consensus architecture. In a PoA / permissioned / consortium solution, DAGs are definitely more scalable. In a completely unpermissioned environment, blockchains are more secure. It's a very difficult question to answer because there are so many variables at play.

Personally, I think DAGs are fantastic for aBFT consensus, so I like them there, but I think the traditional blockchain is the better structure for time-based sequences. So most of my research work is focused on how to order a DAG to output a blockchain.

So how do a lot of these other projects achieve high TPS? Not as a function of the output structure, but as a function of consensus.

An easy example: take Ethereum right now. Currently it's Proof of Work at ~13s block time and ~9 TPS. But what if we simply decreased the block time to 1s? Would we then get ~117 TPS (9 TPS x 13)? (Here we need to look at data transmission; I will cover that a bit later.)

Next, let's look at consensus. If we change from PoW to PoS and keep the block time the same, does anything change? No, since proof of work is just the security, not the scalability. If we change the block size (or gas limit, in Ethereum) or the block time, then we can achieve more throughput.

So what options do we have to achieve more throughput? Let's look at traditional technology scaling: if you have a single-threaded process that does 1 action per second and you want to increase that, you simply add more threads; 10 threads equals 10 actions per second. But what if all of these threads at some point need to share the same data? Now the threads contend for the same state (and can deadlock), and the system needs to make a decision. That's normally fairly easy to resolve in centralized systems, for example first come, first served (but there you don't have any malicious actors).

So the problem we are trying to solve is one of ordering: which event happened first? Proof-of-work based systems solve this because the proof of work is essentially a timer that only lets events trickle in, so they don't cause a deadlock.

But let's assume we could get rid of that time limit and still achieve scalability. You could do this via standard BFT, but that increases message complexity (you need at least 3 rounds of messages times the number of participating nodes, and the delays on messages can be unbounded). So a lot of systems use BFT but limit the number of validators, for example only selecting 9/12/21/... validators out of thousands: dPoS with BFT.
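To put rough numbers on that message overhead, here is a minimal back-of-the-envelope sketch (mine, not from any specific protocol). The post counts rounds times nodes; in PBFT-style all-to-all variants each round is itself n messages per node, so the sketch assumes messages grow with rounds x n^2. Exact constants vary by protocol.

```python
# Back-of-the-envelope message counts for classical BFT consensus.
# Assumption (illustrative, protocol-dependent): 3 rounds per decision,
# each round all-to-all, so messages scale with rounds * n^2.

def bft_messages(n_validators: int, rounds: int = 3) -> int:
    """Approximate messages needed per consensus decision."""
    return rounds * n_validators * n_validators

for n in (9, 21, 100, 1000):
    print(f"{n:>4} validators -> ~{bft_messages(n):,} messages per decision")

# Output:
#    9 validators -> ~243 messages per decision
#   21 validators -> ~1,323 messages per decision
#  100 validators -> ~30,000 messages per decision
# 1000 validators -> ~3,000,000 messages per decision
```

Capping the validator set at 21 instead of 1,000 cuts per-decision messaging by over three orders of magnitude, which is exactly the dPoS trade: fewer validators, less overhead, more throughput.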
Another solution is sharding. An easy example is namespace sharding: group 1 handles all accounts that start with the letter A, group 2 handles B, and so forth. Now you could essentially have one PoW network that handles all the A accounts, another all the B accounts, etc. You could also shard on some other namespace; for example, ETH could have a consensus layer per ERC20 token (all separated for parallel / concurrent throughput). A minimal sketch of the idea follows below.

So it's less blockchain vs DAG, and more storage / consensus / abstraction considerations.
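To make the namespace-sharding example concrete, here is a minimal sketch. The names below (route, NUM_SHARDS) are hypothetical, for illustration only: accounts are routed to independent shard groups by the first letter of their name, and each group could then run its own consensus in parallel.

```python
# Minimal namespace-sharding sketch: route accounts to shard groups by
# their first letter. route() and NUM_SHARDS are illustrative names,
# not from any real protocol.
from collections import defaultdict

NUM_SHARDS = 26  # one group per letter, A through Z

def route(account: str) -> int:
    """Map an account name to a shard index by its first letter."""
    return (ord(account[0].upper()) - ord("A")) % NUM_SHARDS

shards = defaultdict(list)  # shard index -> accounts it is responsible for
for account in ("alice", "bob", "brenda", "carol"):
    shards[route(account)].append(account)

print(dict(shards))
# {0: ['alice'], 1: ['bob', 'brenda'], 2: ['carol']}
```

Each group can now process its accounts concurrently; the hard part this sketch ignores is cross-shard transactions (alice paying bob), which require coordination between groups.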