Prizm Unification

Why is PRIZM not as fast as some cryptocurrencies?

Hi, everyone! ✋🏻😎 Yesterday morning I noticed a conversation in one of the chats, which literally said the following:

There are a lot of cryptocurrencies on the market besides our Prizm, and the latest technologies appear among them every day; everything gets better, faster and more interesting. Either you at least keep up, or you do better!

I decided to join the conversation, which turned to the speed of transactions. I wanted someone in the chat to explain what exactly determines the so-called transaction speed and why some other cryptocurrencies achieve a higher one.

Unfortunately, only one person answered, and only in the most general terms, with a single word: "Decentralization".

I won't torment you any longer with stories about Prizm chats. Let's instead understand at what price the high transaction speed of some cryptocurrencies is achieved, and why PRIZM is not so fast.

What are we distributing on the blockchain network?

Consider the simplified mechanics of the appearance of a transaction in a block.

At the first stage, the transaction appears on one of the network nodes. For example, Heiney sends Dan a certain amount of coins in exchange for delivering a "hot lunch". This does not yet impose any obligations on either party and does not mean that the payment is fixed on the network in any way. So far, this is just an announcement of a deal.

In order for a transaction to get into a block, the node generating that block must know about the transaction. But since the block will be generated by a node selected from all the nodes of the network, we have to distribute our transaction to everyone.

So, first of all, the transaction is distributed in accordance with the protocol prescribed in the source code.

Next, the network chooses a "leader": the node that will generate the block and write our transaction into it. After that, it will start sending its block to the network nodes available to it.
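The flow described above (flood the transaction to every node, then let a randomly chosen leader pack it into a block) can be sketched as a toy simulation. Everything here is illustrative: the ring-plus-random topology, the node count and the transaction format are assumptions, not PRIZM's actual networking code.

```python
import random

# Toy gossip simulation: each node relays a new transaction to its peers
# until every mempool contains it; then a random "leader" builds a block.
# Topology, node count and transaction format are all invented here.

def make_network(n_nodes, extra_peers, seed=1):
    rnd = random.Random(seed)
    peers = {}
    for i in range(n_nodes):
        linked = {(i + 1) % n_nodes}        # ring link keeps the graph connected
        while len(linked) < 1 + extra_peers:
            j = rnd.randrange(n_nodes)
            if j != i:
                linked.add(j)
        peers[i] = sorted(linked)
    return peers

def gossip(peers, origin, tx):
    mempools = {node: set() for node in peers}
    mempools[origin].add(tx)
    frontier, hops = [origin], 0
    while frontier:
        hops += 1
        nxt = []
        for node in frontier:
            for p in peers[node]:
                if tx not in mempools[p]:   # relay only to peers that lack it
                    mempools[p].add(tx)
                    nxt.append(p)
        frontier = nxt
    return mempools, hops

peers = make_network(n_nodes=50, extra_peers=3)
mempools, hops = gossip(peers, origin=0, tx="Heiney->Dan:5 coins")

# Every node now knows the transaction, so whichever node the network
# selects as leader can include it in the block it generates.
leader = random.Random(2).randrange(50)
block = sorted(mempools[leader])
print(hops, block)
```

The key point the sketch makes is that the transaction must finish this flooding phase before block generation, because the leader is not known in advance.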

Now let's take a closer look at this whole process. 


Let's immediately look at the classic simplified way of allocating blocks. Any protocol for the interaction of network nodes begins with some preliminary steps.

First, “node 1” invites “node 2” to an exchange. By "node 2" one should actually understand several network nodes available to "node 1" (its peers), but for simplicity we will consider interaction with a single node.

Then "node 2" responds and asks "node 1" to send the block.

Then "node 1" sends its block to "node 2". 

Next, "node 2" starts the verification procedure for this block. Verification time is linear in the block size, since it involves validating every transaction: the more transactions in the block, the longer the check takes.

When we talked about propagating a transaction so that all nodes in the network would know about it, those nodes did not just thoughtlessly pass it to each other; they also validated it. Validation consists of verifying the correctness of the signature, and this check also takes some time.

In the classical version of the protocol, a node does not start contacting the next node (propagating the block further) until the full check is complete. In effect, previously performed work is duplicated: transactions are verified again, even though ideally they were already verified during their own propagation.
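The cost of this duplication is easy to see with a back-of-the-envelope model. The timing constants below are invented purely for illustration; they are not measurements of any real network.

```python
# Rough model of the classic "fully validate, then forward" relay.
# Every hop pays the full verification cost before relaying, so the
# repeated checks dominate once blocks get large. Constants are made up.

PER_TX_CHECK = 0.002   # assumed seconds to re-verify one transaction
TRANSFER     = 0.05    # assumed seconds to send the block over one hop

def classic_latency(n_hops, n_txs):
    per_hop = TRANSFER + n_txs * PER_TX_CHECK  # validate fully, then forward
    return n_hops * per_hop

# A 1000-transaction block crossing 10 hops: roughly 20.5 seconds,
# almost all of it spent repeating the same transaction checks.
print(classic_latency(n_hops=10, n_txs=1000))
```

With these (invented) numbers, the transfer itself accounts for only half a second of the total; the rest is verification repeated at every hop.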

From the point of view of "node 2", which received the block, it makes sense to re-validate the transactions, because there is a chance that "node 1" somehow tampered with them. To guard against such changes, "node 2" checks them again.

Problems that arise

This classic approach of propagating blocks all the way to the very last node in the network introduces noticeable latency. The main factor behind this delay is not bandwidth or network architecture, although those do matter in some individual cases. The delay is mainly the time spent on verification, and as we have seen, that verification is repeated at every hop.

Thus it turns out that roughly 10% of all network nodes receive a block within the first two seconds, but the time it takes for the same block to reach the remaining nodes keeps growing and can exceed a minute.

Transaction propagation does not look any better, and may even be worse: the time it takes some transactions to travel from one "end" of the network to the other is measured in minutes. Such slow propagation makes something like double spending possible, that is, deceiving the recipient of a transfer.

Let's apply this manipulation to our example. At the moment Heiney makes a transfer to Dan, Dmitry, located elsewhere, simultaneously transfers the same coins to another recipient (or to himself). We now have two conflicting transactions: both spend the same coins.

Heiney sends his transaction into the part of the network where Dan will see it as quickly as possible, so that Dan believes the transaction will remain there forever and hands over his "hot lunch". The hot lunch is, of course, a playful example, but in online trade, where some digital product is delivered instantly, this timing can matter a great deal.

In the same nanosecond, Dmitry sends the second transaction into a part of the network where the "density" of nodes is much higher, so this transaction reaches the majority of nodes faster. When Heiney's conflicting transaction reaches them, they reject it (it fails validation). By "network density" we conditionally mean not just the concentration of nodes themselves, but their concentration in terms of computing power, which makes it much more likely that the next block will be generated from this part of the network.

As a result, a block containing Dmitry's transaction will appear in this "dense" part and will spread toward Heiney just as slowly as his own transaction spread toward Dan, who by that time has already handed the "hot lunch" to Heiney. (We understand, of course, the analogy with a digital product.)
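The conflict rule at the heart of this scenario can be sketched in a few lines. The transaction format and the `try_accept` helper are hypothetical, used only to show why the transaction that reaches a node first wins.

```python
# Toy conflict rule: a node rejects a transaction if the coins it spends
# are already spent by a transaction the node has seen earlier.
# Transaction structure is invented, not PRIZM's actual format.

def try_accept(mempool, spent_inputs, tx):
    tx_id, spends = tx
    if spends in spent_inputs:
        return False               # conflict: these coins are already spent
    spent_inputs.add(spends)
    mempool.append(tx_id)
    return True

mempool, spent = [], set()
ok1 = try_accept(mempool, spent, ("heiney_to_dan", "coins#42"))
ok2 = try_accept(mempool, spent, ("dmitry_to_self", "coins#42"))
print(ok1, ok2, mempool)   # the first arrival wins, the second is rejected
```

Slow propagation is exactly what makes this dangerous: different nodes see the two conflicting transactions in a different order, and for minutes the network disagrees about which one is "first".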

Therefore, a large number of studies have applied various optimization mechanisms to reduce the delay in transaction propagation across the network.

Suggested Solutions

One solution is to decompose the verification process so that blocks can be sent further across the network without waiting for verification to complete.

The verification process is split into several stages, the first of which is a difficulty check; the transactions are verified afterward. At some point the network may split into two disconnected subnets, each of which begins to live its own life, and in each of them blocks are generated (not necessarily with the same amount of generating power): in one, on top of the block containing Heiney's transaction; in the other, on top of the block with Dmitry's transaction.

After some time, the base target for block generators in one of the subnets will turn out to be higher than in the other. And one day, at some node, blocks with different accumulated cumulative difficulty will "meet", and that node will be able to determine easily and quickly which of the chains has accumulated more difficulty. The cumulative difficulty of a chain reflects the computational work expended in computing the block hashes of that chain.

Thus, having checked the difficulty, this node can already start inviting the next node to an exchange, and only then check the transactions; since someone has already verified them earlier, this is not so critical.
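The fork-choice rule described here (compare accumulated cumulative difficulty rather than chain length) can be sketched as follows. The block structure and difficulty values are invented for illustration.

```python
# Fork choice by cumulative difficulty: when two competing chains "meet"
# at a node, the node keeps whichever chain accumulated more difficulty.
# Note that this is NOT the same as picking the longer chain.

def cumulative_difficulty(chain):
    return sum(block["difficulty"] for block in chain)

# A longer but "lighter" chain versus a shorter but "heavier" one.
chain_a = [{"id": f"a{i}", "difficulty": 7} for i in range(4)]   # total 28
chain_b = [{"id": f"b{i}", "difficulty": 10} for i in range(3)]  # total 30

def choose(x, y):
    return x if cumulative_difficulty(x) >= cumulative_difficulty(y) else y

winner = choose(chain_a, chain_b)
print([b["id"] for b in winner])   # the heavier chain wins despite being shorter
```

Because comparing two sums is cheap, a node can settle the fork immediately after the difficulty check, long before it has re-verified every transaction.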

Another solution is for "node 2", having received an exchange invitation from "node 1", to immediately send exchange invitations to its own peers (available nodes), in parallel with requesting the block from "node 1". Having received block requests from those peers, "node 2" puts them in a queue, and as soon as the difficulty check of the block that came from "node 1" passes, it sends the block on down the queue.
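A minimal sketch of this eager-relay idea, assuming a cheap difficulty check and a simple request queue (all names and structures are hypothetical):

```python
from collections import deque

# Eager relay sketch: while still waiting for the block from upstream,
# the node already invites its own peers and queues their block requests;
# once the cheap difficulty check passes, it drains the queue at once.
# Full transaction verification is deferred until after forwarding.

def relay(block, my_peers, difficulty_ok):
    pending = deque()
    for peer in my_peers:       # invitations went out before the block arrived
        pending.append(peer)    # peer responded: queue its block request
    if not difficulty_ok(block):
        return []               # failed the cheap check: never forwarded
    forwarded = []
    while pending:
        forwarded.append(pending.popleft())  # send the block down the queue
    return forwarded

sent = relay({"height": 101, "difficulty": 12},
             ["peer_a", "peer_b", "peer_c"],
             difficulty_ok=lambda b: b["difficulty"] > 0)
print(sent)
```

The point of the queue is that the round-trip of invitations overlaps with the block transfer from upstream, so the only per-hop cost left on the critical path is the cheap check itself.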

And the third approach to reducing the delay is a star topology.


This is a classic client-server architecture of node interaction: there is a central node through which all exchange takes place. No matter how long a block takes to fully verify before sending, the mere fact that there are only two transfers between any two nodes significantly speeds up block delivery.
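The advantage of the star topology comes down to hop counts, which a few lines of arithmetic illustrate; the fanout and network size below are arbitrary assumptions.

```python
import math

# Hop-count comparison: in a gossip mesh the worst-case path grows with
# network size, while in a star topology every delivery is exactly two
# hops (sender -> hub -> receiver). Numbers are purely illustrative.

def gossip_worst_hops(n_nodes, fanout):
    # With fanout f, roughly f**k nodes are reached after k hops,
    # so the network "diameter" grows logarithmically with size.
    return math.ceil(math.log(n_nodes, fanout))

def star_hops():
    return 2   # any node -> central hub -> any other node

print(gossip_worst_hops(10_000, 4))  # worst-case hops in the mesh
print(star_hops())                   # always 2 in the star
```

With 10,000 nodes and fanout 4, the mesh needs about seven hops in the worst case, each paying transfer plus verification, while the star always pays for exactly two transfers.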

"latest technology, better, faster and more interesting"

Undoubtedly and obviously, the third approach relies on a centralized mechanism of interaction between nodes. And this, of course, allows data to be transferred faster than the classical approach.

To reduce transfer times further, some "latest technologies" introduce artificial metrics, ranking nodes in a certain way. For example, only certain nodes are granted the right to validate blocks, or only to validate transactions; some nodes are given the right only to generate blocks; and some are allowed to mix these roles depending on the circumstances.

The circumstance is that users must delegate coins to such "select" nodes, because without acquiring some "weight" these nodes cannot perform their duties. Such nodes effectively become the central agents that a blockchain is supposed to get rid of.

The problems of such networks would become much more obvious if all nodes were endowed with the same properties as the "chosen ones", making everyone equal, that is, making the network truly peer-to-peer.

In that case, at best, the network would begin splitting into subnets faster than it validates its new blocks; that is, it would be unable to synchronize correctly and would eventually reach complete desynchronization.

It is precisely in order to hide these problems and make everything work quickly and smoothly that these "latest technologies" are forced to sacrifice the most valuable thing, namely decentralization. The number of such "selected" nodes in some networks does not exceed two hundred! If there were too many of them, the network could no longer demonstrate its artificially created capabilities.

But the "chosen ones" are responsible for the operation of the entire network! If someone manages to negatively affect more than half of these nodes (and in some cases just over 30% is enough), consider that the entire network is affected. One can agree that many people find such networks interesting, and perhaps they are not bad for individual projects or companies, but will such networks be secure enough on a global scale?


PRIZM does not create such "chosen" validators, so information does not reach the most distant nodes quickly. But to be fair, that speed can hardly be called low for a peer-to-peer network, especially since it is not easy to observe even under load, thanks to a deterministic algorithm that dynamically adjusts the base target for generating the next block. As a result, your transactions become irreversible, on average, after about 10 minutes. And this happens in a peer-to-peer network, without artificial tricks with nodes.
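As a hedged sketch, a base-target retarget rule in the spirit of NXT-family proof-of-stake chains (of which PRIZM is a descendant) might look like this. The 60-second interval and the clamping factors are assumptions for illustration, not the actual PRIZM constants.

```python
# Hypothetical base-target adjustment: if the last block took longer than
# the target interval, raise the base target (blocks become easier to
# generate); if it came too fast, lower it. Changes are clamped so a
# single outlier cannot swing the target wildly. Constants are assumed.

TARGET_INTERVAL = 60  # assumed seconds between blocks

def retarget(base_target, last_block_seconds):
    ratio = last_block_seconds / TARGET_INTERVAL
    ratio = max(0.5, min(2.0, ratio))   # clamp to [0.5x, 2x] per block
    return base_target * ratio

bt = 1000.0
bt = retarget(bt, 120)   # slow block: target doubles (hits the 2x cap)
bt = retarget(bt, 30)    # fast block: target halves (hits the 0.5x cap)
print(bt)                # the two corrections cancel out
```

A deterministic rule of this shape keeps block times near the target without any privileged nodes, which is the trade-off the article describes: steadier confirmation times instead of raw propagation speed.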

Well, if you are only interested in speed, you can buy shares of Visa and MasterCard. The technology is not exactly fresh, but it is probably "better", because the whole world runs on it; it is faster, because kilograms of money have been invested in servers; and it is very interesting, because from a high vantage point you can never predict the fate of your money.

P.S. The process of node interaction and validation in PRIZM can be studied in the sources at prizm.space, in the ProcessBlock class, as well as in the BlockchainProcessorImpl class, in the implementation of the processPeerBlock() method.