Bandwidth for a full node : Bitcoin

Want to *really* help decentralization by getting more full nodes online? Code a simple bandwidth limit option (like in any decent torrent client) in Bitcoin Core so that people can actually run nodes without ruining their connection

This is a big reason why people stop running full nodes, at least home users. I've seen multiple complaints from folks saying that it eats all their bandwidth, making their internet connection nearly unusable. The result? They stop running a full node. And who can blame them?
There's been plenty of discussion of this issue for four years (!) on the Bitcoin GitHub: https://github.com/bitcoin/bitcoin/issues/273 - with all the recent talk about worries over the decreasing full node count, I think this would be a relatively non-controversial way to make running a full node much more accessible, and therefore to greatly increase the number of full nodes on the network.
As it is, the user can apply supplemental bandwidth shapers and QoS rules to deal with this, e.g. https://github.com/bitcoin/bitcoin/blob/master/contrib/qos/tc.sh, but how many people are really going to do that - or even find out that they can?
Concerns on the GitHub issue that this will slow down the network are absurd, in my opinion. For one thing, there's no real incentive to "leech" by capping upload speed well under capacity; for another, the people who want to limit their speeds are otherwise just going to get frustrated and shut their nodes down completely, as many already have. Even if they're throttled, having a great many more nodes than we have now is going to increase the network's speed, not decrease it.
I wish that I had the coding skills to write a patch for this myself, but I do not, and so I just want to try to encourage those who can to make it a higher priority. I think that the impact of this issue has been drastically underestimated by a large number of people.
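For anyone who does pick this up: the mechanism every decent torrent client uses is just a token bucket that gates how many bytes may be sent per interval. Here's a minimal sketch of the general idea in Python - illustrative only, not Bitcoin Core code, and the class and rate figures are hypothetical:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter, the scheme most P2P clients use.

    rate_bps: sustained upload budget in bytes per second.
    burst:    maximum bytes that may be sent in a single burst.
    """
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until nbytes of upload budget is available, then spend it."""
        while True:
            now = time.monotonic()
            # Refill in proportion to elapsed time, up to the burst cap.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: cap uploads at 50 KB/s, calling consume() before each send.
limiter = TokenBucket(rate_bps=50_000, burst=100_000)
# limiter.consume(len(chunk)); sock.send(chunk)
```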
For what it's worth (which ain't much), I'll kick in a 0.050 BTC bounty towards a working, tested pull request that implements proper bandwidth limiting in Bitcoin Core, in a style similar to how most P2P programs do it today (either in the Preferences dialog or somewhere in the main interface). Perhaps someone with a little more in their wallet could add to that bounty.
[PGP-signed message; SHA512 signature block omitted]
For signature verification, my GPG key with fingerprint 69E7 EB65 1CB6 19DE 9153 3A2B D16B 4CC5 857D 0298 is available at https://np.reddit.com/publickeyexchange/comments/2cmfob/sapiophiles_public_key/, on the major SKS keyservers and on KeyBase at https://keybase.io/sapiophile - my KeyBase proof for this reddit username can be found at https://np.reddit.com/KeybaseProofs/comments/2dfzvj/my_keybase_proof_redditsapiophile/
EDIT: Bounty is now up to 1.65 BTC + $48 in BTC (1.85 BTC total at this time), thanks to wserd, globramma2, CanaryInTheMine, hellobitcoinworld, imaginary_username, Huntred, Melting_Harps, SD7, zebrahat, jefdaj, and especially Place60! Who's next to help sweeten the pot?
submitted by sapiophile to Bitcoin

Gavin's prediction that the ETH full node count will continue to stay higher than Bitcoin's, even with higher bandwidth requirements. Is bandwidth really a limiter on decentralization?

submitted by AnonymousRev to Bitcoin

If I have to limit my full-node bandwidth, what's the best thing to disable? /r/Bitcoin

submitted by BitcoinAllBot to BitcoinAll

Guide: How to run a full node on Windows when you have monthly bandwidth limits /r/Bitcoin

submitted by BitcoinAllBot to BitcoinAll

Comparison between Avalanche, Cosmos and Polkadot

Reposting after it was mistakenly removed by mods (since resolved - thanks!)
A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities, there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important.
For better formatting see https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b
https://preview.redd.it/e8s7dj3ivpq51.png?width=428&format=png&auto=webp&s=5d0463462702637118c7527ebf96e91f4a80b290

Overview

Cosmos

Cosmos is a heterogeneous network of many independent parallel blockchains, each powered by classical BFT consensus algorithms like Tendermint. Developers can easily build custom application specific blockchains, called Zones, through the Cosmos SDK framework. These Zones connect to Hubs, which are specifically designed to connect zones together.
The vision of Cosmos is to have thousands of Zones and Hubs that are interoperable through the Inter-Blockchain Communication protocol (IBC). Cosmos can also connect to other systems through peg zones: specifically designed zones, each custom-made to interact with another ecosystem such as Ethereum or Bitcoin. Cosmos does not use sharding; each Zone and Hub is sovereign, with its own validator set.
For a more in-depth look at Cosmos, with more background on the points made in this article, please see my three part series — Part One, Part Two, Part Three
(Quick video overview of Cosmos: https://youtu.be/Eb8xkDi_PUg - also embedded in the medium article: https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Polkadot

Polkadot is a heterogeneous blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain. Developers can easily build custom application specific parachains through the Substrate development framework.
The relay chain validates the state transitions of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert; this is to ensure that the validity of the entire system can persist, and that no individual part is corruptible. The shared state means that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Interoperability between parachains is enabled through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges: specially designed parachains or parathreads, each custom-made to interact with another ecosystem such as Ethereum or Bitcoin. The hope is to have 100 parachains connect to the relay chain.
For a more in-depth look at Polkadot, with more background on the points made in this article, please see my three part series — Part One, Part Two, Part Three
(Quick video overview of Polkadot: https://youtu.be/_-k0xkooSlA - also embedded in the medium article: https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Avalanche

Avalanche is a platform of platforms, ultimately consisting of thousands of subnets that form a heterogeneous, interoperable network of many blockchains. It takes advantage of the revolutionary Avalanche consensus protocols to provide a secure, globally distributed, interoperable and trustless framework, offering unprecedented decentralisation whilst being able to comply with regulatory requirements.
Avalanche allows anyone to create their own tailor-made, application-specific blockchains, supporting multiple custom virtual machines such as the EVM and WASM, written in popular languages like Go (with others coming in the future) rather than lightly used, poorly understood languages like Solidity. A virtual machine can then be deployed on a custom blockchain network, called a subnet, which consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance.
Avalanche was built with serving financial markets in mind. It has native support for easily creating and trading digital smart assets, with complex custom rulesets that define how an asset is handled and traded, to ensure regulatory compliance can be met. Interoperability is enabled between blockchains within a subnet as well as between subnets. Like Cosmos and Polkadot, Avalanche is also able to connect to other systems through bridges, using custom virtual machines made to interact with other ecosystems such as Ethereum and Bitcoin.
For a more in-depth look at Avalanche, with more background on the points made in this article, please see here and here
(Quick video overview of Avalanche: https://youtu.be/mWBzFmzzBAg - also embedded in the medium article: https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)

Comparison between Cosmos, Polkadot and Avalanche

A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities, there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions. I want to stress that it's not a case of one platform being the killer of all other platforms, far from it; there won't be one platform to rule them all, and too often tribalism has plagued this space. Blockchains are going to completely revolutionise most industries and have a profound effect on the world we know today. It's still very early in this space, with most adoption limited to speculation and trading, mainly due to the limitations of current blockchains and the current iteration of Ethereum, which all three of these platforms hope to address. For those who just want a quick summary, see the image at the bottom of the article. With that said, let's have a look.

Scalability

Cosmos

Each Zone and Hub in Cosmos is capable of up to around 1000 transactions per second, with bandwidth being the bottleneck in consensus. Cosmos aims to have thousands of Zones and Hubs all connected through IBC. There is no limit on the number of Zones / Hubs that can be created.

Polkadot

Parachains in Polkadot are capable of up to around 1500 transactions per second. A portion of the parachain slots on the Relay Chain will be designated as part of the parathread pool; there, the performance of a single parachain slot is split between many parathreads, which offer lower performance and compete amongst themselves in a per-block auction to have their transactions included in the next relay chain block. The number of parachains is limited by the number of validators on the relay chain; the hope is to achieve around 100 parachains.

Avalanche

Avalanche is capable of around 4500 transactions per second per subnet. This is based on modest hardware requirements of just 2 CPU cores and 4 GB of memory (to ensure maximum decentralisation) and a validator set of over 2,000 nodes. Performance is CPU-bound; if higher performance is required, more specialised subnets can be created with higher minimum requirements, able to achieve 10,000+ tps within a subnet. Avalanche aims to have thousands of subnets (each with multiple virtual machines / blockchains) all interoperable with each other. There is no limit on the number of subnets that can be created.

Results

All three platforms offer vastly superior performance to the likes of Bitcoin and Ethereum 1.0. Avalanche, with its higher transactions per second, no limit on the number of subnets / blockchains that can be created, and consensus that can scale to potentially millions of validators all participating at once, scores ✅✅✅. Polkadot claims to offer more tps than Cosmos, but is limited in the number of parachains (around 100), whereas with Cosmos there is no limit on the number of hubs / zones that can be created. Cosmos is limited to a fairly small validator set of around 200 before performance degrades, whereas Polkadot hopes to reach 1000 validators in the relay chain (albeit only a small number of validators are assigned to each parachain). Thus Cosmos and Polkadot each score ✅✅
https://preview.redd.it/2o0brllyvpq51.png?width=1000&format=png&auto=webp&s=8f62bb696ecaafcf6184da005d5fe0129d504518

Decentralisation

Cosmos

Tendermint consensus is limited to around 200 validators before performance starts to degrade. Whilst there is the Cosmos Hub, it is one of many hubs in the network; there is no central hub, and no limit on the number of zones / hubs that can be created.

Polkadot

Polkadot has 1000 validators in the relay chain, and these are split up into small groups that validate each parachain (a minimum of 14 per parachain). The relay chain is a central point of failure, as all parachains connect to it, and the number of parachains is limited by the number of validators (they hope to achieve 100 parachains). Due to the limited number of parachain slots available, significant sums of DOT will need to be purchased to win an auction to lease a slot for up to 24 months at a time, which is likely to mean that only those with sufficient funds can secure one. Parathreads, however, are an alternative for those that require less (and more variable) performance or that can't secure a parachain slot.

Avalanche

Avalanche consensus can scale to tens of thousands of validators, even potentially millions of validators, all participating in consensus through repeated sub-sampling. The more validators, the faster the network becomes, as the load is split between them. Hardware requirements are modest, so anyone can run a node, and there is no limit on the number of subnets / virtual machines that can be created.
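To make "repeated sub-sampling" concrete, here is a heavily simplified Snowball-style query loop in Python. It is illustrative only: the real protocol is more involved, and the Peer class, sample size, thresholds and binary-choice framing here are stand-in assumptions:

```python
import random
from dataclasses import dataclass

@dataclass
class Peer:
    preference: str  # the value this validator currently prefers

def snowball(my_pref, peers, k=20, alpha=15, beta=15):
    """Toy Snowball-style loop: repeatedly query k random validators,
    adopt the supermajority preference of each sample, and decide once
    the same preference wins beta consecutive rounds."""
    confidence = 0
    while confidence < beta:
        sample = random.sample(peers, k)      # sub-sample, never all-to-all
        votes = [p.preference for p in sample]
        top = max(set(votes), key=votes.count)
        if votes.count(top) >= alpha:         # alpha-of-k supermajority
            confidence = confidence + 1 if top == my_pref else 1
            my_pref = top
        else:
            confidence = 0                    # inconclusive round, reset
    return my_pref

# Demo: 1,000 validators, 85% preferring "X". Each node samples only
# k=20 peers per round, so per-node cost is independent of network size.
network = [Peer("X") if i < 850 else Peer("Y") for i in range(1000)]
print(snowball("Y", network))  # overwhelmingly likely to converge on "X"
```

Because each node queries only k peers per round regardless of network size, the message cost per node stays constant as the validator set grows, which is why the validator count can scale so far.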

Results

Avalanche offers unparalleled decentralisation, using its revolutionary consensus protocols that can scale to millions of validators all participating in consensus at the same time. There is no limit to the number of subnets and virtual machines that can be created, and they can be created by anyone for a small fee, so it scores ✅✅✅. Cosmos is limited to 200 validators, but there is no limit on the number of zones / hubs that can be created, which anyone can do, so it scores ✅✅. Polkadot hopes to accommodate 1000 validators in the relay chain (albeit these are split amongst the parachains); the number of parachains is limited and may be cost-prohibitive for many, and the relay chain is ultimately a single point of failure. Whilst I'm definitely not saying it's centralised, and it is more decentralised than many others, in comparison between the three it scores ✅
https://preview.redd.it/ckfamee0wpq51.png?width=1000&format=png&auto=webp&s=c4355f145d821fabf7785e238dbc96a5f5ce2846

Latency

Cosmos

Tendermint consensus, used in Cosmos, reaches finality within 6 seconds. Cosmos consists of many Zones and Hubs that connect to each other; communication between two zones could pass through many hubs along the way, which can add to latency depending on the path taken, as explained in part two of the articles on Cosmos. There is no need to wait an extended period of time for fear of rollbacks.

Polkadot

Polkadot uses a hybrid consensus protocol consisting of a block production protocol, BABE, and a finality gadget called GRANDPA that works to agree on one chain, out of many possible forks, by following a simpler fork-choice rule. Rather than voting on every block, it reaches agreement on chains: as soon as more than 2/3 of validators attest to a chain containing a certain block, all blocks leading up to that one are finalised at once.
If an invalid block is detected after it has been finalised, then the relay chain would need to be reverted along with every parachain. This is particularly important when connecting to external blockchains, as those don't share the state of the relay chain and thus can't be rolled back. The longer the period before finality, the more secure the network is, as there is more time for additional checks to be performed and reported, but at the expense of speed. Finality is reached within 60 seconds between parachains, but for external ecosystems like Ethereum, whose state obviously can't be rolled back like a parachain's, finality will need to be much longer (60 minutes was suggested in the whitepaper), as discussed in more detail in part three.
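A toy sketch of that chain-voting idea in Python - the data structures are hypothetical stand-ins rather than Polkadot's actual types, but it shows why one supermajority vote on a chain head finalises every ancestor at once:

```python
from collections import Counter

def finalized_blocks(attestations, parent, n_validators):
    """Toy GRANDPA-style tally: a vote for a chain head counts as a
    vote for every ancestor of that head, so whole chains finalise
    at once when any block gathers more than 2/3 support.

    attestations: {validator_id: head_block_hash it attests to}
    parent:       {block_hash: parent_block_hash, or None at genesis}
    """
    counts = Counter()
    for head in attestations.values():
        block = head
        while block is not None:   # walk back to genesis, counting
            counts[block] += 1     # the vote for every ancestor
            block = parent[block]
    threshold = 2 * n_validators / 3
    return {b for b, c in counts.items() if c > threshold}

# Example: 4 validators, fork after A. Three attest to head B, one to C.
parent = {"A": None, "B": "A", "C": "A"}
votes = {"v1": "B", "v2": "B", "v3": "B", "v4": "C"}
print(finalized_blocks(votes, parent, 4))  # {'A', 'B'}: B has 3 > 8/3 votes
```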

Avalanche

Avalanche consensus achieves finality within 3 seconds, with most transactions finalising in under 1 second, immutable and completely irreversible. Any subnet can connect directly to another without having to go through multiple hops, and any VM can talk to another VM within the same subnet as well as in external subnets. There is no need to wait an extended period of time for fear of rollbacks.

Results

With regards to performance, far too much emphasis is put on tps as a metric; the other equally important metric, if not more important with regards to finance, is latency. Throughput measures the amount of data that can be handled in a given time, whereas latency is the time it takes for a single action to complete. It's pointless saying you can process more transactions per second than VISA when it takes 60 seconds for a transaction to complete. Low latency also greatly increases general usability and customer satisfaction; nowadays everyone expects card payments and online payments to happen instantly. Avalanche achieves the best results, scoring ✅✅✅; Cosmos comes in second with 6-second finality ✅✅; and Polkadot, with 60-second finality (which may be 60 minutes for external blockchains), scores ✅
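A quick back-of-the-envelope illustration of why the two metrics are independent (numbers purely illustrative):

```python
# Throughput and latency are different dimensions of "performance":
tps = 10_000        # transactions accepted per second (throughput)
finality_s = 60     # seconds until any one of them is irreversible (latency)

# The chain ingests 600,000 tx per minute, yet every customer at a
# checkout still waits a full minute before their payment is final.
print(f"{tps * finality_s:,} tx settle per finality window,")
print(f"but each individual payment waits {finality_s} s")
```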
https://preview.redd.it/kzup5x42wpq51.png?width=1000&format=png&auto=webp&s=320eb4c25dc4fc0f443a7a2f7ff09567871648cd

Shared Security

Cosmos

Every Zone and Hub in Cosmos has its own validator set and different trust assumptions. Cosmos is researching a shared security model where a Hub can validate the state of connected zones for a fee, but this has not been released yet. Once available, this will make shared security optional rather than mandatory.

Polkadot

Shared Security is mandatory with Polkadot which uses a Shared State infrastructure between the Relay Chain and all of the connected parachains. If the Relay Chain must revert for any reason, then all of the parachains would also revert. Every parachain makes the same trust assumptions, and as such the relay chain validates state transition and enables seamless interoperability between them. In return for this benefit, they have to purchase DOT and win an auction for one of the available parachain slots.
However, parachains can't rely solely on the relay chain for their security: as discussed in part three, they also need to implement censorship-resistance measures and their own sybil-resistance mechanisms, using proof of work or proof of stake, on the parachain itself.

Avalanche

A subnet in Avalanche consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance. So unlike in Cosmos, where each zone / hub has its own validators, a subnet can validate a single or many virtual machines / blockchains with a single validator set. Shared security is optional.

Results

Shared security is mandatory in Polkadot and a key design decision in its infrastructure. The relay chain validates the state transitions of all connected parachains, and thus it scores ✅✅✅. Subnets in Avalanche can validate the state of either a single or many virtual machines; each subnet can have its own token and shares a validator set, where complex rulesets can be configured to meet regulatory compliance. It scores ✅✅. Every Zone and Hub in Cosmos has its own validator set / token; research is underway to have a hub validate the state transitions of connected zones, but as this is still early in the research phase it scores ✅ for now.
https://preview.redd.it/pbgyk3o3wpq51.png?width=1000&format=png&auto=webp&s=61c18e12932a250f5633c40633810d0f64520575

Current Adoption

Cosmos

The Cosmos project started in 2016, with an ICO held in April 2017. There are currently around 50 projects building on the Cosmos SDK; a full list can be seen here by filtering for Cosmos SDK. Not all of the projects will necessarily connect using the native Cosmos SDK and IBC; some, such as Binance Chain, have forked parts of the Cosmos SDK and utilise Tendermint consensus, but have said they will connect in the future.

Polkadot

The Polkadot project started in 2016, with an ICO held in October 2017. There are currently around 70 projects building on Substrate; a full list can be seen here by filtering for Substrate Based. As with Cosmos, not all projects built using Substrate will necessarily connect to Polkadot, and parachains and parathreads aren't yet implemented in either the live network or the test network (Kusama) as of this writing.

Avalanche

Avalanche, in comparison, started much later, with Ava Labs being founded in 2018. Avalanche held its ICO in July 2020. Due to the much shorter time it has been in development, the number of confirmed projects is smaller, with around 14 projects currently building on Avalanche. Due to the customisability of the platform, though, many virtual machines can be used within a subnet, making it incredibly easy to port projects over. As an example, it will launch with the Ethereum Virtual Machine, which enables byte-for-byte compatibility, and all the tooling like Metamask, Truffle etc. will work, so projects can easily move over to benefit from the performance, decentralisation and low gas fees offered. In the future, Cosmos and Substrate virtual machines could be implemented on Avalanche.
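To illustrate that tooling compatibility, here's a minimal sketch using web3.py against a local node. The URL follows AvalancheGo's documented C-Chain RPC path, but treat the endpoint and port as assumptions and substitute your own node's address:

```python
from web3 import Web3

# Assumed local AvalancheGo C-Chain endpoint; any EVM JSON-RPC URL works.
w3 = Web3(Web3.HTTPProvider("http://localhost:9650/ext/bc/C/rpc"))

# Ordinary Ethereum JSON-RPC calls - no Avalanche-specific code needed.
print(w3.eth.chain_id)      # standard eth_chainId call
print(w3.eth.block_number)  # standard eth_blockNumber call
print(w3.eth.get_balance("0x0000000000000000000000000000000000000000"))
```

Because the chain speaks plain Ethereum JSON-RPC, this is what the byte-for-byte compatibility claim amounts to in practice: the same libraries, wallets and deployment scripts work without modification.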

Results

Whilst it’s still early for all 3 projects (and the entire blockchain space as a whole), there are currently more projects confirmed to be building on Cosmos and Polkadot, mostly due to their longer time in development. Whilst Cosmos has fewer projects than Polkadot, zones are already implemented, whereas Polkadot doesn't currently have parachains; IBC, to connect zones and hubs together, is due to launch in Q2 2021. Thus both score ✅✅✅. Avalanche has been in development for a much shorter time, but is launching with an impressive feature set right from the start: the ability to create subnets, VMs, assets, NFTs, permissioned and permissionless blockchains, cross-chain atomic swaps within a subnet, smart contracts, a bridge to Ethereum, etc. Applications can easily port over from other platforms and use all the existing tooling such as Metamask / Truffle, but benefit from the performance, decentralisation and low gas fees offered. Currently though, just based on the number of projects in comparison, it scores ✅.
https://preview.redd.it/4zpi6s85wpq51.png?width=1000&format=png&auto=webp&s=e91ade1a86a5d50f4976f3b23a46e9287b08e373

Enterprise Adoption

Cosmos

Cosmos enables permissioned and permissionless zones, which can connect to each other, with the ability to have full control over who validates the blockchain. For permissionless zones, each zone / hub can have its own token, and they control who validates.

Polkadot

With Polkadot, the state transition is performed by a small, randomly assigned group of validators from the relay chain, with the possibility that state is rolled back if an invalid transaction is found on any of the other parachains. This may pose a problem for enterprises that need complete control over who performs validation, for regulatory reasons. In addition, due to the limited number of parachain slots available, enterprises would have to acquire and lock up large amounts of a highly volatile asset (DOT), with the possibility that they are outbid in a future auction and can no longer have their parachain validated; parathreads don't provide the guaranteed performance requirements such applications need to function.

Avalanche

Avalanche enables permissioned and permissionless subnets, and complex rulesets can be configured to meet regulatory compliance. For example, a subnet can be created where it's mandatory that all validators are from a certain legal jurisdiction, or hold a specific licence and are regulated by the SEC, etc. Subnets are also able to scale to tens of thousands of validators, even potentially millions of nodes, all participating in consensus, so every enterprise can run their own node rather than only a small number. Enterprises don't have to hold large amounts of a highly volatile asset; instead they pay a fee in AVAX for the creation of the subnets and blockchains, which is burnt.

Results

Avalanche provides the customisability to run private permissioned blockchains as well as permissionless ones, where the enterprise is in control of who validates the blockchain, with the ability to use complex rulesets to meet regulatory compliance; thus it scores ✅✅✅. Cosmos is also able to run permissioned and permissionless zones / hubs, so enterprises have full control over who validates a blockchain, and scores ✅✅. Polkadot requires locking up large amounts of a highly volatile asset, with the possibility of being outbid by competitors and, where guaranteed performance is required, being unable to run the application and having to migrate away. The relay chain validates the state transitions and can roll back a parachain should an invalid block be detected on another parachain; thus it scores ✅.
https://preview.redd.it/li5jy6u6wpq51.png?width=1000&format=png&auto=webp&s=e2a95f1f88e5efbcf9e23c789ae0f002c8eb73fc

Interoperability

Cosmos

Cosmos will connect Hubs and Zones together through its IBC protocol (due for release in Q2 2021). Connecting to blockchains outside of the Cosmos ecosystem would either require the connected blockchain to fork its code to implement IBC or, more likely, a custom "Peg Zone" would be created, specific to the particular blockchain it's trying to bridge to, such as Ethereum. Each Zone and Hub has different trust levels, and connectivity between two zones can carry different trust depending on which path it takes (this is discussed more in this article). Finality time is low at 6 seconds but, depending on the number of hops, can increase significantly.

Polkadot

Polkadot’s shared state means each parachain that connects shares the same trust assumptions, namely those of the relay chain validators, and that if one blockchain needs to be reverted, all of them will need to be reverted. Interoperability between parachains is enabled through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges: specially designed parachains or parathreads, each custom-made to interact with another ecosystem such as Ethereum or Bitcoin. Finality time between parachains is around 60 seconds, but longer will be needed for connecting to external blockchains (initial figures of 60 minutes in the whitepaper), limiting the appeal of connecting two external ecosystems together through Polkadot. Polkadot is also limited in the number of parachain slots available, thus limiting the number of blockchains that can be bridged. Parathreads could be used for lower-performance bridges, but the speed of future blockchains is only going to increase.

Avalanche

A subnet can validate multiple virtual machines / blockchains, and all blockchains within a subnet share the same trust assumptions / validator set, enabling cross-chain interoperability. Interoperability is also possible between any other subnets, with the hope that Avalanche will consist of thousands of subnets. Each subnet may have a different trust level, but as the primary network consists of all validators, it can be used as a source of trust if required. As Avalanche supports many virtual machines, bridges to other ecosystems are created by running the connected ecosystem's virtual machine; there will be an Ethereum bridge using the EVM shortly after mainnet. Finality time is much faster, at sub-3 seconds (with most happening under 1 second), with no chance of rolling back, so it is more appealing when connecting to external blockchains.

Results

All three systems are able to perform interoperability within their ecosystems, transferring assets as well as data, and can use bridges to connect to external blockchains. Cosmos has different trust levels between its zones and hubs, which can create issues depending on which path a message takes, and adds latency. Polkadot provides the same trust assumptions for all connected parachains, but has long finality and a limited number of parachain slots available. Avalanche provides the same trust assumptions for all blockchains within a subnet, and different trust levels between subnets; however, as the primary network consists of all validators, it can be used for trust. Avalanche also has a much faster finality time, with no limit on the number of blockchains / subnets / bridges that can be created. Overall, all three blockchains excel at interoperability within their ecosystems, and each scores ✅✅
https://preview.redd.it/ai0bkbq8wpq51.png?width=1000&format=png&auto=webp&s=3e85ee6a3c4670f388ccea00b0c906c3fb51e415

Tokenomics

Cosmos

The ATOM token is the native token of the Cosmos Hub. It is commonly mistaken for a token used throughout the Cosmos ecosystem, whereas it's just used for one of many hubs in Cosmos, each of which has its own token. Currently ATOM has little utility, as IBC isn't released and the Hub has no connections to other zones / hubs. Once IBC is released, zones may prefer to connect to a different hub instead, in which case ATOM is not used. ATOM isn't a fixed-cap supply token: supply will continuously increase, with yearly inflation of around 10% depending on the % staked. The current market cap of ATOM as of this writing is $1 billion, with a 203 million circulating supply. Rewards can be earnt through staking to offset the dilution caused by inflation. Delegators can also get slashed and lose a portion of their ATOM should their validator misbehave.
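As a worked example of how staking rewards offset that dilution - the supply and 10% inflation figures come from above, while the staked fraction is an assumption for illustration:

```python
supply = 203_000_000     # circulating ATOM, per the figures above
inflation = 0.10         # ~10% yearly issuance
staked_fraction = 0.65   # assumption for illustration

new_tokens = supply * inflation
# Issuance goes to stakers, so the staking yield exceeds inflation:
yield_on_stake = new_tokens / (supply * staked_fraction)
print(f"staker yield ~{yield_on_stake:.1%} vs {inflation:.0%} inflation")

# A non-staker's share of total supply shrinks by ~9% over the year:
share_change = supply / (supply + new_tokens) - 1
print(f"non-staker ownership change: {share_change:.1%}")
```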

Polkadot

Polkadot’s native token is DOT, and it's used to secure the Relay Chain. Each parachain needs to acquire sufficient DOT to win an auction for an available parachain lease period of up to 24 months at a time. Parathreads have a fixed registration fee, realistically much lower than the cost of acquiring a parachain slot, and compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. DOT isn't a fixed-cap supply token: supply will continuously increase, with yearly inflation of around 10% depending on the % staked. The current market cap of DOT as of this writing is $4.4 billion, with an 852 million circulating supply. Delegators can also get slashed and lose their DOT (potentially 100% of it for serious attacks) should their validator misbehave.

Avalanche

AVAX is the native token of the primary network in Avalanche. Every validator of any subnet also has to validate the primary network and stake a minimum of 2,000 AVAX. Unlike other consensus methods, there is no limit to the number of validators, so it can cater for tens of thousands, even potentially millions, of validators. As every validator validates the primary network, this can be a source of trust for interoperability between subnets as well as for connecting to other ecosystems, thus increasing the transaction fees paid in AVAX. There is no slashing in Avalanche, so there is no risk of losing your AVAX when selecting a validator; instead, the rewards earnt from staking can be slashed should the validator misbehave. Because Avalanche doesn't have direct slashing, it is technically possible for someone to both stake AND deliver tokens for something like a flash loan, under the invariant that all staked tokens are returned, thus being able to make a profit with staked tokens outside of staking itself.
There will also be a separate subnet for Athereum, which is a "spoon", or friendly fork, of Ethereum, benefiting from the Avalanche consensus protocol and the applications in the Ethereum ecosystem. Its native token, ATH, will be airdropped to ETH holders, and potentially to AVAX holders as well. The same can be done for other blockchains.
Transaction fees on the primary network for all three of its blockchains, as well as the subscription fees for creating a subnet and blockchain, are paid in AVAX and are burnt, creating deflationary pressure. AVAX has a fixed capped supply of 720 million tokens, creating scarcity, rather than an unlimited supply that continuously increases at a compounded rate each year like the others. Initially 360 million tokens were minted at mainnet, with vesting periods of between 1 and 10 years and tokens gradually unlocking each quarter. The circulating supply is 24.5 million AVAX, and the current market cap of AVAX is around $100 million.

Results

Avalanche’s AVAX, with its fixed capped supply, deflationary pressure, very strong utility, potential to receive airdrops and low market cap, scores ✅✅✅. Polkadot’s DOT also has very strong utility, given the need for auctions to acquire parachain slots, but has no deflationary mechanism, no fixed capped supply, and is already valued at $3.8 billion, so it scores ✅✅. Cosmos's ATOM token is only for the Cosmos Hub, one of what will be many hubs in the ecosystem, and has very little utility currently (this may improve once IBC is released, and if the Cosmos Hub actually becomes the hub that people want to connect to rather than something like Binance instead). There is no fixed capped supply, and it is currently valued at $1.1 billion, so it scores ✅.
https://preview.redd.it/mels7myawpq51.png?width=1000&format=png&auto=webp&s=df9782e2c0a4c26b61e462746256bdf83b1fb906
All three are excellent projects and have similarities as well as many differences. Just to reiterate, this article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions; you may have different criteria which are important to you, and score them differently. There won't be one platform to rule them all, however, with some use cases better suited to one platform over another, and it's not a zero-sum game. Blockchain is going to completely revolutionise industries and the Internet itself. The more projects researching and delivering breakthrough technology the better, each learning from the others and pushing each other to reach that goal earlier. The current market is a tiny speck of what's in store in terms of value and adoption, and it's going to be exciting to watch it unfold.
https://preview.redd.it/dbb99egcwpq51.png?width=1388&format=png&auto=webp&s=aeb03127dc0dc74d0507328e899db1c7d7fc2879
For more information see the articles below (each with additional sources at the bottom of their articles)
Avalanche, a Revolutionary Consensus Engine and Platform. A Game Changer for Blockchain
Avalanche Consensus, The Biggest Breakthrough since Nakamoto
Cosmos — An Early In-Depth Analysis — Part One
Cosmos — An Early In-Depth Analysis — Part Two
Cosmos Hub ATOM Token and the commonly misunderstood staking tokens — Part Three
Polkadot — An Early In-Depth Analysis — Part One — Overview and Benefits
Polkadot — An Early In-Depth Analysis — Part Two — How Consensus Works
Polkadot — An Early In-Depth Analysis — Part Three — Limitations and Issues
submitted by xSeq22x to CryptoCurrency

[ CryptoCurrency ] Comparison between Avalanche, Cosmos and Polkadot

[ 🔴 DELETED 🔴 ] Topic originally posted in CryptoCurrency by xSeq22x [link]
A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare? Whilst there are similarities there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important.
For better formatting see https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b
https://preview.redd.it/lg16iwk2dhq51.png?width=428&format=png&auto=webp&s=6c899ee69800dd6c5e2900d8fa83de7a43c57086

Overview

Cosmos

Cosmos is a heterogeneous network of many independent parallel blockchains, each powered by classical BFT consensus algorithms like Tendermint. Developers can easily build custom application specific blockchains, called Zones, through the Cosmos SDK framework. These Zones connect to Hubs, which are specifically designed to connect zones together.
The vision of Cosmos is to have thousands of Zones and Hubs that are Interoperable through the Inter-Blockchain Communication Protocol (IBC). Cosmos can also connect to other systems through peg zones, which are specifically designed zones that each are custom made to interact with another ecosystem such as Ethereum and Bitcoin. Cosmos does not use Sharding with each Zone and Hub being sovereign with their own validator set.
For a more in-depth look at Cosmos and provide more reference to points made in this article, please see my three part series — Part One, Part Two, Part Three
https://youtu.be/Eb8xkDi_PUg

Polkadot

Polkadot is a heterogeneous blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain. Developers can easily build custom application specific parachains through the Substrate development framework.
The relay chain validates the state transition of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert. This is to ensure that the validity of the entire system can persist, and no individual part is corruptible. The shared state makes it so that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Interoperability is enabled between parachains through Cross-Chain Message Passing (XCMP) protocol and is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads that each are custom made to interact with another ecosystem such as Ethereum and Bitcoin. The hope is to have 100 parachains connect to the relay chain.
For a more in-depth look at Polkadot and provide more reference to points made in this article, please see my three part series — Part One, Part Two, Part Three
https://youtu.be/_-k0xkooSlA

Avalanche

Avalanche is a platform of platforms, ultimately consisting of thousands of subnets to form a heterogeneous interoperable network of many blockchains, that takes advantage of the revolutionary Avalanche Consensus protocols to provide a secure, globally distributed, interoperable and trustless framework offering unprecedented decentralisation whilst being able to comply with regulatory requirements.
Avalanche allows anyone to create their own tailor-made application specific blockchains, supporting multiple custom virtual machines such as EVM and WASM and written in popular languages like Go (with others coming in the future) rather than lightly used, poorly-understood languages like Solidity. This virtual machine can then be deployed on a custom blockchain network, called a subnet, which consist of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains where complex rulesets can be configured to meet regulatory compliance.
Avalanche was built with serving financial markets in mind. It has native support for easily creating and trading digital smart assets with complex custom rule sets that define how the asset is handled and traded to ensure regulatory compliance can be met. Interoperability is enabled between blockchains within a subnet as well as between subnets. Like Cosmos and Polkadot, Avalanche is also able to connect to other systems through bridges, through custom virtual machines made to interact with another ecosystem such as Ethereum and Bitcoin.
For a more in-depth look at Avalanche and provide more reference to points made in this article, please see here and here
https://youtu.be/mWBzFmzzBAg

Comparison between Cosmos, Polkadot and Avalanche

A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare? Whilst there are similarities there are also a lot of differences. This article is not intended to be an extensive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions. I want to stress that it’s not a case of one platform being the killer of all other platforms, far from it. There won’t be one platform to rule them all, and too often the tribalism has plagued this space. Blockchains are going to completely revolutionise most industries and have a profound effect on the world we know today. It’s still very early in this space with most adoption limited to speculation and trading mainly due to the limitations of Blockchain and current iteration of Ethereum, which all three of these platforms hope to address. For those who just want a quick summary see the image at the bottom of the article. With that said let’s have a look

Scalability

Cosmos

Each Zone and Hub in Cosmos is capable of up to around 1000 transactions per second with bandwidth being the bottleneck in consensus. Cosmos aims to have thousands of Zones and Hubs all connected through IBC. There is no limit on the number of Zones / Hubs that can be created

Polkadot

Parachains in Polkadot are also capable of up to around 1500 transactions per second. A portion of the parachain slots on the Relay Chain will be designated as part of the parathread pool, the performance of a parachain is split between many parathreads offering lower performance and compete amongst themselves in a per-block auction to have their transactions included in the next relay chain block. The number of parachains is limited by the number of validators on the relay chain, they hope to be able to achieve 100 parachains.

Avalanche

Avalanche is capable of around 4500 transactions per second per subnet, this is based on modest hardware requirements to ensure maximum decentralisation of just 2 CPU cores and 4 GB of Memory and with a validator size of over 2,000 nodes. Performance is CPU-bound and if higher performance is required then more specialised subnets can be created with higher minimum requirements to be able to achieve 10,000 tps+ in a subnet. Avalanche aims to have thousands of subnets (each with multiple virtual machines / blockchains) all interoperable with each other. There is no limit on the number of Subnets that can be created.

Results

All three platforms offer vastly superior performance to the likes of Bitcoin and Ethereum 1.0. Avalanche with its higher transactions per second, no limit on the number of subnets / blockchains that can be created and the consensus can scale to potentially millions of validators all participating in consensus scores ✅✅✅. Polkadot claims to offer more tps than cosmos, but is limited to the number of parachains (around 100) whereas with Cosmos there is no limit on the number of hubs / zones that can be created. Cosmos is limited to a fairly small validator size of around 200 before performance degrades whereas Polkadot hopes to be able to reach 1000 validators in the relay chain (albeit only a small number of validators are assigned to each parachain). Thus Cosmos and Polkadot scores ✅✅
https://preview.redd.it/ththwq5qdhq51.png?width=1000&format=png&auto=webp&s=92f75152c90d984911db88ed174ebf3a147ca70d

Decentralisation

Cosmos

Tendermint consensus is limited to around 200 validators before performance starts to degrade. Whilst there is the Cosmos Hub it is one of many hubs in the network and there is no central hub or limit on the number of zones / hubs that can be created.

Polkadot

Polkadot has 1000 validators in the relay chain and these are split up into a small number that validate each parachain (minimum of 14). The relay chain is a central point of failure as all parachains connect to it and the number of parachains is limited depending on the number of validators (they hope to achieve 100 parachains). Due to the limited number of parachain slots available, significant sums of DOT will need to be purchased to win an auction to lease the slot for up to 24 months at a time. Thus likely to lead to only those with enough funds to secure a parachain slot. Parathreads are however an alternative for those that require less and more varied performance for those that can’t secure a parachain slot.

Avalanche

Avalanche consensus scan scale to tens of thousands of validators, even potentially millions of validators all participating in consensus through repeated sub-sampling. The more validators, the faster the network becomes as the load is split between them. There are modest hardware requirements so anyone can run a node and there is no limit on the number of subnets / virtual machines that can be created.

Results

Avalanche offers unparalleled decentralisation using its revolutionary consensus protocols that can scale to millions of validators all participating in consensus at the same time. There is no limit to the number of subnets and virtual machines that can be created, and they can be created by anyone for a small fee, it scores ✅✅✅. Cosmos is limited to 200 validators but no limit on the number of zones / hubs that can be created, which anyone can create and scores ✅✅. Polkadot hopes to accommodate 1000 validators in the relay chain (albeit these are split amongst each of the parachains). The number of parachains is limited and maybe cost prohibitive for many and the relay chain is a ultimately a single point of failure. Whilst definitely not saying it’s centralised and it is more decentralised than many others, just in comparison between the three, it scores ✅
https://preview.redd.it/lv2h7g9sdhq51.png?width=1000&format=png&auto=webp&s=56eada6e8c72dbb4406d7c5377ad15608bcc730e

Latency

Cosmos

Tendermint consensus used in Cosmos reaches finality within 6 seconds. Cosmos consists of many Zones and Hubs that connect to each other. Communication between 2 zones could pass through many hubs along the way, thus also can contribute to latency times depending on the path taken as explained in part two of the articles on Cosmos. It doesn’t need to wait for an extended period of time with risk of rollbacks.

Polkadot

Polkadot provides a Hybrid consensus protocol consisting of Block producing protocol, BABE, and then a finality gadget called GRANDPA that works to agree on a chain, out of many possible forks, by following some simpler fork choice rule. Rather than voting on every block, instead it reaches agreements on chains. As soon as more than 2/3 of validators attest to a chain containing a certain block, all blocks leading up to that one are finalized at once.
If an invalid block is detected after it has been finalised then the relay chain would need to be reverted along with every parachain. This is particularly important when connecting to external blockchains as those don’t share the state of the relay chain and thus can’t be rolled back. The longer the time period, the more secure the network is, as there is more time for additional checks to be performed and reported but at the expense of finality. Finality is reached within 60 seconds between parachains but for external ecosystems like Ethereum their state obviously can’t be rolled back like a parachain and so finality will need to be much longer (60 minutes was suggested in the whitepaper) and discussed in more detail in part three

Avalanche

Avalanche consensus achieves finality within 3 seconds, with most happening sub 1 second, immutable and completely irreversible. Any subnet can connect directly to another without having to go through multiple hops and any VM can talk to another VM within the same subnet as well as external subnets. It doesn’t need to wait for an extended period of time with risk of rollbacks.

Results

With regards to performance far too much emphasis is just put on tps as a metric, the other equally important metric, if not more important with regards to finance is latency. Throughput measures the amount of data at any given time that it can handle whereas latency is the amount of time it takes to perform an action. It’s pointless saying you can process more transactions per second than VISA when it takes 60 seconds for a transaction to complete. Low latency also greatly increases general usability and customer satisfaction, nowadays everyone expects card payments, online payments to happen instantly. Avalanche achieves the best results scoring ✅✅✅, Cosmos with comes in second with 6 second finality ✅✅ and Polkadot with 60 second finality (which may be 60 minutes for external blockchains) scores ✅
https://preview.redd.it/qe8e5ltudhq51.png?width=1000&format=png&auto=webp&s=18a2866104590f81a818690337f9121161dda890

Shared Security

Cosmos

Every Zone and Hub in Cosmos has their own validator set and different trust assumptions. Cosmos are researching a shared security model where a Hub can validate the state of connected zones for a fee but not released yet. Once available this will make shared security optional rather than mandatory.

Polkadot

Shared Security is mandatory with Polkadot which uses a Shared State infrastructure between the Relay Chain and all of the connected parachains. If the Relay Chain must revert for any reason, then all of the parachains would also revert. Every parachain makes the same trust assumptions, and as such the relay chain validates state transition and enables seamless interoperability between them. In return for this benefit, they have to purchase DOT and win an auction for one of the available parachain slots.
However, parachains can’t just rely on the relay chain for their security, they will also need to implement censorship resistance measures and utilise proof of work / proof of stake for each parachain as well as discussed in part three, thus parachains can’t just rely on the security of the relay chain, they need to ensure sybil resistance mechanisms using POW and POS are implemented on the parachain as well.

Avalanche

A subnet in Avalanche consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains where complex rulesets can be configured to meet regulatory compliance. So unlike in Cosmos where each zone / hub has their own validators, A subnet can validate a single or many virtual machines / blockchains with a single validator set. Shared security is optional

Results

Shared security is mandatory in polkadot and a key design decision in its infrastructure. The relay chain validates the state transition of all connected parachains and thus scores ✅✅✅. Subnets in Avalanche can validate state of either a single or many virtual machines. Each subnet can have their own token and shares a validator set, where complex rulesets can be configured to meet regulatory compliance. It scores ✅ ✅. Every Zone and Hub in cosmos has their own validator set / token but research is underway to have the hub validate the state transition of connected zones, but as this is still early in the research phase scores ✅ for now.
https://preview.redd.it/0mnvpnzwdhq51.png?width=1000&format=png&auto=webp&s=8927ff2821415817265be75c59261f83851a2791

Current Adoption

Cosmos

The Cosmos project started in 2016 with an ICO held in April 2017. There are currently around 50 projects building on the Cosmos SDK with a full list can be seen here and filtering for Cosmos SDK . Not all of the projects will necessarily connect using native cosmos sdk and IBC and some have forked parts of the Cosmos SDK and utilise the tendermint consensus such as Binance Chain but have said they will connect in the future.

Polkadot

The Polkadot project started in 2016 with an ICO held in October 2017. There are currently around 70 projects building on Substrate and a full list can be seen here and filtering for Substrate Based. Like with Cosmos not all projects built using substrate will necessarily connect to Polkadot and parachains or parathreads aren’t currently implemented in either the Live or Test network (Kusama) as of the time of this writing.

Avalanche

Avalanche in comparison started much later with Ava Labs being founded in 2018. Avalanche held it’s ICO in July 2020. Due to lot shorter time it has been in development, the number of projects confirmed are smaller with around 14 projects currently building on Avalanche. Due to the customisability of the platform though, many virtual machines can be used within a subnet making the process incredibly easy to port projects over. As an example, it will launch with the Ethereum Virtual Machine which enables byte for byte compatibility and all the tooling like Metamask, Truffle etc. will work, so projects can easily move over to benefit from the performance, decentralisation and low gas fees offered. In the future Cosmos and Substrate virtual machines could be implemented on Avalanche.

Results

Whilst it’s still early for all 3 projects (and the entire blockchain space as a whole), there is currently more projects confirmed to be building on Cosmos and Polkadot, mostly due to their longer time in development. Whilst Cosmos has fewer projects, zones are implemented compared to Polkadot which doesn’t currently have parachains. IBC to connect zones and hubs together is due to launch Q2 2021, thus both score ✅✅✅. Avalanche has been in development for a lot shorter time period, but is launching with an impressive feature set right from the start with ability to create subnets, VMs, assets, NFTs, permissioned and permissionless blockchains, cross chain atomic swaps within a subnet, smart contracts, bridge to Ethereum etc. Applications can easily port over from other platforms and use all the existing tooling such as Metamask / Truffle etc but benefit from the performance, decentralisation and low gas fees offered. Currently though just based on the number of projects in comparison it scores ✅.
https://preview.redd.it/rsctxi6zdhq51.png?width=1000&format=png&auto=webp&s=ff762dea3cfc2aaaa3c8fc7b1070d5be6759aac2

Enterprise Adoption

Cosmos

Cosmos enables permissioned and permissionless zones which can connect to each other with the ability to have full control over who validates the blockchain. For permissionless zones each zone / hub can have their own token and they are in control who validates.

Polkadot

With Polkadot, state transitions are performed by a small, randomly assigned group of validators from the relay chain, with the possibility that state is rolled back if an invalid transaction is found on any of the other parachains. This may pose a problem for enterprises that need complete control over who performs validation for regulatory reasons. In addition, due to the limited number of parachain slots available, enterprises would have to acquire and lock up large amounts of a highly volatile asset (DOT), with the risk of being outbid in future auctions and finding they can no longer have their parachain validated; parathreads don’t provide the guaranteed performance requirements for such applications to function.

Avalanche

Avalanche enables permissioned and permissionless subnets, and complex rulesets can be configured to meet regulatory compliance. For example, a subnet can be created where it is mandatory that all validators are from a certain legal jurisdiction, or hold a specific license and are regulated by the SEC, etc. Subnets are also able to scale to tens of thousands of validators, and potentially even millions of nodes, all participating in consensus, so every enterprise can run its own node rather than only a small number. Enterprises don’t have to hold large amounts of a highly volatile asset; instead they pay a fee in AVAX for the creation of the subnets and blockchains, which is burnt.
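As a conceptual illustration of such a ruleset, here is a minimal sketch. The types and checks are hypothetical, purely my own illustration of the idea rather than the actual Avalanche API (real subnets enforce rules at the platform level):

```python
# Hypothetical sketch of a compliance ruleset for subnet validators.
# Not the actual Avalanche API; an illustration of the concept only.
from dataclasses import dataclass

@dataclass
class Validator:
    node_id: str
    jurisdiction: str       # ISO country code, e.g. "US"
    licenses: set[str]      # regulatory licenses the operator holds

def may_validate(v: Validator) -> bool:
    """Example rule: validator must be US-based and SEC-regulated."""
    return v.jurisdiction == "US" and "SEC" in v.licenses

print(may_validate(Validator("NodeID-abc", "US", {"SEC"})))   # True
print(may_validate(Validator("NodeID-def", "DE", set())))     # False
```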

Results

Avalanche provides the customisability to run private permissioned blockchains as well as permissionless ones, with the enterprise in control of who validates the blockchain and with the ability to use complex rulesets to meet regulatory compliance; it thus scores ✅✅✅. Cosmos is also able to run permissioned and permissionless zones and hubs, so enterprises have full control over who validates a blockchain, and scores ✅✅. Polkadot requires locking up large amounts of a highly volatile asset, with the risk of being outbid by competitors and, where guaranteed performance is required, being unable to run the application and having to migrate away. The relay chain validates state transitions and can roll back a parachain should an invalid block be detected on another parachain; it thus scores ✅.
https://preview.redd.it/7phaylb1ehq51.png?width=1000&format=png&auto=webp&s=d86d2ec49de456403edbaf27009ed0e25609fbff

Interoperability

Cosmos

Cosmos will connect Hubs and Zones together through its IBC protocol (due to launch in Q2 2021). Connecting to blockchains outside of the Cosmos ecosystem would either require the connected blockchain to fork its code to implement IBC or, more likely, a custom “Peg Zone” will be created, built specifically to work with the particular blockchain being bridged to, such as Ethereum. Each Zone and Hub has a different trust level, and connectivity between two zones can have different trust depending on which path it takes (this is discussed more in this article). Finality time is low at 6 seconds, but depending on the number of hops, this can increase significantly.

Polkadot

Polkadot’s shared state means each parachain that connects shares the same trust assumptions of the relay chain validators, and that if one blockchain needs to be reverted, all of them will need to be reverted. Interoperability between parachains is enabled through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads, each custom made to interact with another ecosystem such as Ethereum or Bitcoin. Finality time between parachains is around 60 seconds, but longer will be needed for connecting to external blockchains (initial figures of 60 minutes in the whitepaper), limiting the appeal of connecting two external ecosystems together through Polkadot. Polkadot is also limited in the number of parachain slots available, limiting the number of blockchains that can be bridged. Parathreads could be used for lower-performance bridges, but the speed of future blockchains is only going to increase.

Avalanche

A subnet can validate multiple virtual machines / blockchains, and all blockchains within a subnet share the same trust assumptions / validator set, enabling cross-chain interoperability. Interoperability is also possible between any other subnet, with the hope that Avalanche will consist of thousands of subnets. Each subnet may have a different trust level, but as the primary network consists of all validators, it can be used as a source of trust if required. As Avalanche supports many virtual machines, bridges to other ecosystems are created by running the connected virtual machine; there will be an Ethereum bridge using the EVM shortly after mainnet. Finality time is much faster at sub-3 seconds (with most transactions finalising in under 1 second) with no chance of rollback, making it more appealing when connecting to external blockchains.

Results

All three systems are able to perform interoperability within their ecosystems, transferring assets as well as data, and can use bridges to connect to external blockchains. Cosmos has different trust levels between its zones and hubs, which can create issues depending on the path a transfer takes, with additional latency added per hop. Polkadot provides the same trust assumptions for all connected parachains but has long finality and a limited number of parachain slots. Avalanche provides the same trust assumptions for all blockchains within a subnet, and different trust levels between subnets; however, as the primary network consists of all validators, it can be used as a source of trust. Avalanche also has a much faster finality time, with no limit on the number of blockchains, subnets or bridges that can be created. Overall, all three blockchains excel at interoperability within their ecosystems and each score ✅✅.
https://preview.redd.it/l775gue3ehq51.png?width=1000&format=png&auto=webp&s=b7c4b5802ceb1a9307bd2a8d65f393d1bcb0d7c6

Tokenomics

Cosmos

The ATOM token is the native token of the Cosmos Hub. It is commonly mistaken for the token of the whole Cosmos ecosystem, whereas it is just the token of one of many hubs, each of which has its own token. Currently ATOM has little utility, as IBC isn’t released and the hub has no connections to other zones / hubs. Once IBC is released, zones may prefer to connect to a different hub instead, in which case ATOM would not be used. ATOM isn’t a fixed-cap supply token: supply will continuously increase with a yearly inflation of around 10%, depending on the percentage staked. The current market cap for ATOM as of the time of this writing is $1 billion with a 203 million circulating supply. Rewards can be earned through staking to offset the dilution caused by inflation. Delegators can also get slashed and lose a portion of their ATOM should their validator misbehave.
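As a rough sketch of how staking offsets that dilution (the 10% inflation and 70% staked figures below are assumed round numbers for illustration, not official chain parameters): with inflation i and a fraction s of the supply staked, stakers earn roughly i/s on their stake.

```python
# Illustrative sketch of inflation vs. staking dilution.
# Assumed figures: 10% yearly inflation, 70% of supply staked.

def staking_yield(inflation: float, staked_fraction: float) -> float:
    """Approximate nominal yield when all new issuance goes to stakers."""
    return inflation / staked_fraction

def share_change(own_growth: float, supply_growth: float) -> float:
    """Change in one's share of total supply after a year."""
    return (1 + own_growth) / (1 + supply_growth) - 1

i, s = 0.10, 0.70
y = staking_yield(i, s)                                       # ~14.3% nominal
print(f"staker yield: {y:.1%}")
print(f"staker share change: {share_change(y, i):+.1%}")      # ~+3.9%
print(f"non-staker share change: {share_change(0, i):+.1%}")  # ~-9.1% dilution
```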

Polkadot

Polkadot’s native token is DOT, and it is used to secure the Relay Chain. Each parachain needs to acquire sufficient DOT to win an auction for an available parachain lease period of up to 24 months at a time. Parathreads have a fixed registration fee, realistically much lower than the cost of acquiring a parachain slot, and compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. DOT isn’t a fixed-cap supply token: supply will continuously increase with a yearly inflation of around 10%, depending on the percentage staked. The current market cap for DOT as of the time of this writing is $4.4 billion with an 852 million circulating supply. Delegators can also get slashed and lose their DOT (potentially 100% of it for serious attacks) should their validator misbehave.

Avalanche

AVAX is the native token of the primary network in Avalanche. Every validator of any subnet also has to validate the primary network and stake a minimum of 2,000 AVAX. Unlike other consensus methods, there is no limit on the number of validators, so the network can cater for tens of thousands, potentially even millions, of validators. As every validator validates the primary network, it can serve as a source of trust for interoperability between subnets as well as for connecting to other ecosystems, increasing the volume of AVAX transaction fees. There is no slashing in Avalanche, so there is no risk of losing your AVAX when selecting a validator; instead, the rewards earned for staking can be slashed should the validator misbehave. Because Avalanche doesn’t have direct slashing, it is technically possible for someone to both stake AND deliver tokens for something like a flash loan, under the invariant that all staked tokens are returned, thus making a profit with staked tokens outside of staking itself.
There will also be a separate subnet for Athereum, which is a ‘spoon’, or friendly fork, of Ethereum, benefiting from the Avalanche consensus protocol and the applications in the Ethereum ecosystem. Its native token ATH will be airdropped to ETH holders, and potentially to AVAX holders too. The same can be done for other blockchains.
Transaction fees on the primary network for all three of its blockchains, as well as the subscription fees for creating a subnet or blockchain, are paid in AVAX and burnt, creating deflationary pressure. AVAX has a fixed capped supply of 720 million tokens, creating scarcity, rather than an unlimited supply that continuously increases at a compounded rate each year like others. Initially 360 million tokens will be minted at mainnet, with vesting periods of between 1 and 10 years and tokens gradually unlocking each quarter. The circulating supply is 24.5 million AVAX, with tokens gradually released each quarter. The current market cap of AVAX is around $100 million.

Results

Avalanche’s AVAX, with its fixed capped supply, deflationary pressure, very strong utility, potential to receive airdrops and low market cap, scores ✅✅✅. Polkadot’s DOT also has very strong utility given the auctions needed to acquire parachain slots, but has no deflationary mechanisms and no fixed capped supply, and is already valued at $4.4 billion; it therefore scores ✅✅. Cosmos’s ATOM token is only for the Cosmos Hub, one of what will be many hubs in the ecosystem, and has very little utility currently (this may improve once IBC is released, and if the Cosmos Hub actually becomes the hub that people want to connect to rather than something like Binance instead). There is no fixed capped supply and it is currently valued at $1 billion, so it scores ✅.
https://preview.redd.it/zb72eto5ehq51.png?width=1000&format=png&auto=webp&s=0ee102a2881d763296ad9ffba20667f531d2fd7a
All three are excellent projects and have similarities as well as many differences. Just to reiterate, this article is not intended to be an exhaustive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions; you may have different criteria that are important to you, and score them differently. There won’t be one platform to rule them all, with some use cases better suited to one platform than another, and it’s not a zero-sum game. Blockchain is going to completely revolutionize industries and the Internet itself. The more projects researching and delivering breakthrough technology the better, each learning from the others and pushing each other to reach that goal earlier. The current market is a tiny speck of what’s in store in terms of value and adoption, and it’s going to be exciting to watch it unfold.
https://preview.redd.it/fwi3clz7ehq51.png?width=1388&format=png&auto=webp&s=c91c1645a4c67defd5fc3aaec84f4a765e1c50b6
submitted by anticensor_bot to u/anticensor_bot

How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?


https://preview.redd.it/n5jzxozn27v51.png?width=2222&format=png&auto=webp&s=6cd6bd726582bbe2c595e1e467aeb3fc8aabe36f
On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed “How EpiK Protocol ‘saved the miners’ eliminated by Filecoin by launching the E2P storage model”. The following is a transcript of the sharing.
Sharing Session
Eric: Hello, everyone, I’m Eric. I graduated from the School of Information Science at Tsinghua University. My Master’s research was on data storage and big data computing, and I published a number of papers at top industry conferences.
Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, and have been active in the space for years as an early technology-focused investor and industry observer. I am also a blockchain community initiator and technology evangelist.
Leo: Hi, I’m Leo, I’m the CTO of EpiK. Before I got involved in founding EpiK, I spent 3 to 4 years working on blockchain, public chain, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, etc., and I’ve made some great products. EpiK is an answer to the question we’ve been asking for years about how blockchain should be landed, and we hope that EpiK is fortunate enough to be an answer for you as well.
Q & A
Deep Chain Finance:
First of all, let me ask Eric: on October 15, Filecoin’s mainnet launched, which attracted everyone’s attention, but at the same time the calls for forks within Filecoin have never stopped, and the EpiK Protocol is one of them. What I want to know is: what kind of project is EpiK Protocol? Why did you choose to fork in the first place? What are the differences between the forked project and Filecoin itself?
Eric:
First of all, let me answer the first question, what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is making machines understand what humans know and learn new knowledge based on what they already know. Building knowledge graphs at scale is a key step towards full intelligence.
The EpiK Protocol was born to solve the many challenges of building large-scale knowledge graphs. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and generalized economic models. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge graph that will be shared and continuously updated as an eternal knowledge vault for humanity.
And then, for what reason was the fork chosen in the first place?
EpiK’s project founders are all senior blockchain industry practitioners who have been closely following industry developments and application scenarios, among which decentralized storage is a very fresh application scenario.
However, during Filecoin’s development, the team found that due to certain design mechanisms and historical reasons, Filecoin had deviated from the original intention of the project: for example, the overly harsh penalty mechanism threatens to weaken security, and the computing-power race has led to a computing-power monopoly by large miners, who monopolize packaging rights and can inflate their computing power by uploading useless data of their own.
These problems will cause the data environment on Filecoin to get worse and worse, leading to a lack of real value in on-chain data, high data redundancy, and difficulty commercializing the project.
Having noted the above problems, the project team proposed introducing multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store high-value data, the knowledge graph, on the blockchain through decentralized storage, so that the lack of on-chain data value and the monopoly of large miners’ computing power can be solved to a large extent.
Finally, what differences exist between the forked project and Filecoin itself?
Building on the issues above, EpiK’s design is very different from Filecoin’s. First of all, EpiK is more focused in terms of business model: it targets a different market and track from the cloud storage market where Filecoin sits, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost or user experience.
EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of the data stored on the chain while preventing the knowledge graph from being tampered with by a few people, making the commercialization of the entire project reasonable and feasible.
From the perspective of ecosystem building, EpiK treats miners in a much more friendly way and solves Filecoin’s pain points to a large extent. Firstly, it replaces Filecoin’s storage collateral and commitment collateral with a one-time collateral.
Miners participating in EpiK Protocol are only required to pledge 1,000 EPK per miner, and only once before mining, not for each sector.
To put 1,000 EPK in perspective: you only need to participate in pre-mining for about 50 days to earn the tokens used for pledging. The EPK pre-mining campaign is currently underway, running from early September to December, with a daily release of 50,000 ERC-20 standard EPK. Pre-mining nodes whose applications are approved divide these tokens according to the mining ratio of the day, and the tokens can be exchanged 1:1 for mainnet tokens after launch. This will continue to expand the number of miners eligible to participate in EPK mining.
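As a quick sanity check on those numbers (a sketch only; the per-node share used here is a hypothetical figure, not a stated parameter):

```python
# Sketch: days needed to mine the 1,000 EPK one-time pledge during
# pre-mining, given the stated flat release of 50,000 EPK per day.
DAILY_RELEASE = 50_000   # ERC-20 EPK released per day
PLEDGE = 1_000           # one-time pledge per miner

def days_to_pledge(daily_share: float) -> float:
    """daily_share: this node's assumed fraction of the daily mining ratio."""
    return PLEDGE / (DAILY_RELEASE * daily_share)

# A node earning 0.04% of each day's release needs ~50 days,
# consistent with the "about 50 days" figure above.
print(days_to_pledge(0.0004))  # -> 50.0
```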
Secondly, EpiK has a more lenient penalty mechanism than Filecoin’s official consensus, storage and contract penalties, because data can only be uploaded by field experts, the “Expert to Person” (E2P) mode. Every piece of miner data is backed up, which means that if one or more miners go offline, it will not have much impact on the network, and a miner who fails to submit the proof of spacetime in time due to being offline only forfeits the effective computing power of that sector, not the pledged coins.
If the miner can re-submit the proof of spacetime within 28 days, he will regain the computing power.
Unlike Filecoin’s 32 GB sectors, EpiK’s encapsulated sectors are smaller, at only 8 MB each, which solves Filecoin’s sector space wastage problem to a great extent and gives all miners the opportunity to complete encapsulation quickly, which is very friendly to miners with little computing power.
Data volume and quality constraints will also ensure that the gap in effective computing power between large and small miners remains limited.
Finally, unlike Filecoin’s P2P data uploading model, EpiK changes data uploading and maintenance to E2P uploading: field experts upload data and ensure the quality and value of what goes on the chain, while a rational economic model introduces a game relationship between the data storage roles and the data generation roles, ensuring the stability of the whole system and a continuous, high-quality output of on-chain data.
Deep Chain Finance:
Eric, on the eve of Filecoin’s mainnet launch, issues such as Filecoin’s pre-collateral aroused a lot of controversy among miners. In your opinion, what kind of impact will Filecoin’s launch have on itself and the whole distributed storage ecosystem? Do you think the current chaotic FIL prices are reasonable, and what should the normal price of FIL be?
Eric:
The Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral requirement, the storage resource waste and computing-power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism. These problems are quite serious and will greatly affect the development of the Filecoin ecosystem.
Here are two examples to illustrate. Take the problem of big miners’ computing-power monopoly: once big miners have monopolized computing power, a very delicate state arises. When a miner stores a file for an ordinary user, there is no way to verify on-chain whether what he stored was uploaded by himself or by someone else, because a miner can fake another identity and upload data for himself. That means that for any miner choosing which data to store, there is only one goal: to inflate computing power as fast as possible.
In terms of computing power, there is no difference between storing other people’s data and storing my own. When I store someone else’s data, I know nothing about it, and the owner may be somewhere in the world where the bandwidth between us is not good enough.
The best option is therefore to store my own local data, and the result is that no one stores anyone else’s data on the chain at all. Miners only store their own data, because it’s the most economical choice for them, so the network has essentially no storage utility: no one is providing storage for the mass of retail users.
The harsh penalty mechanism will also severely deplete miners’ profits, because DDoS attacks are a very common technique, and a big miner can earn a very high profit in a short period of time by attacking its competitors; this is profitable for all big miners.
As things stand, the vast majority of miners are actually not very well maintained, so they are poorly protected against even low-grade DDoS attacks. The penalty regime is therefore grim for them.
The contradiction between an unreasonable system and real demand will inevitably lead the system to evolve in a more reasonable direction, so there will be many forked projects with more reasonable mechanisms, attracting Filecoin miners and diverting storage power.
Since each project is on the decentralized storage track, their demands on miners are similar or even mutually compatible, so miners will tend towards the forked projects with better economic benefits and business scenarios, filtering out the projects with real practical value.
As for the chaotic FIL price: FIL is a project that has been running for several years and carries too many expectations, so it can only be said that the current situation has its reasons. As for a reasonable price for FIL, there is no way to make a prediction, because in the long run it depends on the commercialization of the project and the value of the actual on-chain data. In other words, we need to keep observing whether Filecoin becomes a game of computing power or a real value carrier.
Deep Chain Finance:
Leo, we just mentioned that the pre-collateral issue of Filecoin caused dissatisfaction among miners, and after Filecoin’s mainnet launch, the test coins from the second round of the Space Race were directly turned into real coins, and official selling of FIL hit the market, so many miners said they were betrayed. What I want to know is: EpiK’s main motto is “save the miners eliminated by Filecoin”, so how does it deal with Filecoin’s various problems, and how will EpiK achieve this “saving”?
Leo:
Filecoin’s tacit approval of computing-power padding was effectively a declaration that the team had chosen to abandon small miners. And turning test coins into real coins hurt the interests of the loyal big miners in one stroke. We do not know why such basic mistakes were made; we can only regret them.
EpiK wasn’t created just to fork Filecoin; rather, to build a shared knowledge graph ecosystem, EpiK had to integrate decentralized storage, so it chose Filecoin’s PoRep and PoSt, the most battle-tested decentralized verification technology. To ensure the quality of knowledge graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from padding computing power, and there is no reason for valueless data to take up such expensive decentralized storage resources.
With the inability to make up computing power, the difference between big miners and small miners is minimal when the amount of knowledge graph data is small.
We can’t say that we can save the big miners, but we are definitely the best choice for the small miners currently being squeezed out by Filecoin.
Deep Chain Finance:
Let me ask Eric: according to the EpiK Protocol, EpiK adopts the E2P model, which only allows voted-in field experts to upload data. This is very different from Filecoin’s P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean the EpiK Protocol is not available to everyone?
Eric:
First, let me explain the advantages of the E2P model over the P2P model.
There are five roles in the DAO ecosystem: miner, coin holder, field expert, bounty hunter and gateway. These five roles allocate the EPKs generated every day when the main network is launched.
The miner owns 75% of the EPKs, the field expert owns 9% of the EPKs, and the voting user shares 1% of the EPKs.
The other 15% of the EPK will fluctuate based on the daily traffic to the network, and the 15% is partly a game between the miner and the field expert.
First, let me describe the relationship between these two roles, the miners and the field experts.
The first group of field experts is selected by the Foundation, covering different areas of knowledge (a wide range, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs to receive 100,000 EPK in votes to become a field expert.
The field expert’s role is to submit high-quality data to the miner, who is responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPK pledged across the entire network for daily traffic (1 EPK = 10 MB/day). A higher pledged percentage indicates higher data demand, which requires miners to increase bandwidth quality; if data demand decreases, field experts are instead required to provide higher-quality data.
This is similar to a library: when there are more visitors, more seats are needed, i.e. the miner is paid to upgrade bandwidth.
When there are fewer visitors, more money goes into buying better books to attract them, i.e. funds go to bounty hunters and field experts to generate more high-quality knowledge graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the officials and big miners in the Filecoin ecosystem.
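To make the 1 EPK = 10 MB/day rule concrete, here is a minimal sketch (the traffic and supply figures are made up for illustration):

```python
# Sketch of the traffic-pledge rule: each pledged EPK buys 10 MB/day of
# data access, and the network-wide pledged fraction signals data demand.
MB_PER_EPK_PER_DAY = 10

def epk_needed(daily_traffic_mb: float) -> float:
    """EPK that must be pledged to cover a given daily traffic volume."""
    return daily_traffic_mb / MB_PER_EPK_PER_DAY

def demand_ratio(total_pledged_epk: float, circulating_epk: float) -> float:
    """Higher ratio = higher data demand, pushing miners to add bandwidth."""
    return total_pledged_epk / circulating_epk

print(epk_needed(1024))                      # ~102.4 EPK for ~1 GB/day
print(demand_ratio(2_000_000, 50_000_000))   # e.g. 4% of supply pledged
```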
This game relationship between data producers and data storers, together with a more rational economic model, will inevitably make the E2P model generate stored on-chain data of much higher quality than the P2P model, with better bandwidth quality for data access, resulting in greater business value and better landing scenarios.
I will then answer the question of whether this means that the EpiK protocol will not be universally accessible to all.
The E2P model only constrains the quality of the data generated and stored, not the roles in the ecosystem. On the contrary, with the introduction of the DAO model, the EpiK ecosystem offers a variety of roles open to ordinary people (anyone competent can, for example, become a bounty hunter), giving everyone a logical way to participate in the system.
For example, a miner with computing power can provide storage, a person with a certain domain knowledge can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to mark and correct data can become a bounty hunter.
Various efficient support tools from the project team will lower the barriers to entry for these roles, allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance:
Leo, some time ago, EpiK released a white paper and an economy whitepaper, explaining the EpiK concept from the perspective of technology and economy model respectively. What I would like to ask is, what are the shortcomings of the current distributed storage projects, and how will EpiK protocol be improved?
Leo:
Distributed storage can easily be confused with systems like Ali’s OceanDB, but in the blockchain field we should focus on decentralized storage first.
There is a big problem with the decentralized storage on the market now, which is “why not eat meat porridge” (the Chinese equivalent of “let them eat cake”: a solution detached from users’ real needs).
How to understand this? By its technical principles, decentralized storage is not cheaper than centralized storage; if a comparison claims it is, the centralized storage being compared against must be rubbish.
What incentive does the average user have to spend more money on decentralized storage to store data?
Is it safer?
Miners on decentralized storage can shut down at any time, which is by no means as safe as keeping a copy each with Ali and Amazon.
More private?
There’s no difference between encrypted data stored on decentralized storage and encrypted data stored on Amazon.
Faster?
The 10-gigabit bandwidth of decentralized storage nodes simply doesn’t compare to the fibre in a centralized server room. This is the root problem of the business model: no one is using it and no one is buying it, so what good is the big vision?
The goal of EpiK is to guide all community participants in jointly building and sharing field knowledge graph data, which is the best way for machines to understand human knowledge. The more knowledge graph data there is, the more knowledge a robot has, and the more intelligent it becomes, exponentially so. In other words, EpiK uses decentralized storage technology to capture the value of exponentially growing data with linearly growing hardware costs, and that’s where the buy-in for EPK comes from.
Organized data is worth far more than organized hard drives, and there will be demand for EPK when robots need intelligence.
Deep Chain Finance:
Let me ask Leo: how many forked projects does Filecoin have so far, roughly? Do you think the waves of forks will grow or shrink after the mainnet launch? Have the requirements of miners at large changed when it comes to participation?
Leo:
We don’t have specific statistics. Now that the mainnet has launched, we expect forked projects to increase; there are so many sidelined miners in the market that need to be organized efficiently.
However, most forked projects we currently see simply modify the parameters of Filecoin’s economic model, which is undesirable: that level of modification can’t change the status quo of miners padding computing power, and the only change to the market is to make some big miners feel more comfortable mining, which won’t help the decentralized storage ecosystem land.
We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, rather than pitching a 100x coin and stoking one wave of FOMO after another.
Deep Chain Finance:
How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
Eric:
The development of the EpiK Protocol is divided into 5 major phases.
Phase I: the test network, “Obelisk”.
Phase II: Mainnet 1.0, “Rosetta”.
Phase III: Mainnet 2.0, “Hammurabi”.
Phase IV: enriching the knowledge graph toolkit.
Phase V: enriching the knowledge graph application ecology.
We are currently in the first phase, the test network “Obelisk”: anyone can sign up to participate in the test network pre-mining to obtain ERC-20 EPK tokens, which can be exchanged one-to-one after the mainnet launches.
We have recently launched ERC-20 EPK on Uniswap; you can buy and sell it freely on Uniswap or download our EpiK mobile wallet.
In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward listings on centralized exchanges.
Users’ Questions
User 1:
Some KOLs have said that Filecoin has already consumed the next few years of its value, so it will plunge. What do you think?
Eric:
First of all, market judgments must correspond to cycles; being bearish on FIL first requires deciding whether you are bearish on the project’s economic model or on the distributed storage track itself.
For our part, we are very confident in the distributed storage track; it will certainly face cycles of growth and decline, which help in choosing the better projects.
Since the existing group of miners and the computing power already produced are fixed, and since EpiK miners and FIL miners are compatible, miners will at any time choose the more promising and economically viable projects.
As for the claim that Filecoin has consumed the next few years of its value and will therefore plunge:
Regarding the market, a plunge is not something one can predict; in this industry you have to keep learning, iterating and making value judgments. Market sentiment going up and down is only one aspect; there are more important factors, such as the big washout in March this year. It can only be said that such events slow the development of the FIL community; prices themselves are indeed unpredictable.
User 2:
Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are EpiK’s landing applications?
Leo: The best and most direct application of EpiK’s knowledge graph is the question and answer system, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game strategy, and so on.
submitted by EpiK-Protocol to u/EpiK-Protocol

UYT Main-Net pre-launching AMA successfully completed with a blast

At 7 pm Beijing time on 29 September 2020, the UYT Main-Net pre-launch AMA was successfully completed with a blast!
Here is a full record of the AMA:
Host: Hello everyone, it’s a great honor to host the first AMA of UYT network in China. Today, we have invited the person in charge of UYT Dao.
Let’s ask Mr. Woo to introduce himself.
Woo: Hello, I’m Ben. I’ve met some of you in the previous global live broadcast. I’m the director of the UYT DAO and the founder of IGNISVC, currently the CEO of the TKNT Foundation, and I have long been engaged in the blockchain industry.
Q1. At present, different types of blockchains have emerged, but cross-chain interaction still suffers from many problems. In your opinion, what is the necessity and significance of cross-chain?
Answer: The full name of UYT is “unite all your tokens”: to integrate all public chains and increase the liquidity of the whole industry. Our purpose is not to create another public chain, but to become a platform for the exchange of value, technology and resources between all public chains. What we need to solve is enabling each individual chain to circulate with the others.
Q2. The founder of Ethereum, V Shen, once wrote a cross-chain operation report for bank alliance chain R3, which mentioned three cross-chain methods. Which one does UYT belong to? Can you briefly introduce the cross-chain solution of UYT?
Answer: In Vitalik’s cross-chain report, there are three main cross-chain methods. The first is that neither party knows they are crossing chains, or they cannot “read” each other, as with a centralized exchange. The second is that one of the chains can read the others, such as a side chain / relay chain: A can read B, but B cannot read A. The third is that both A and B can read each other, achieving the exchange of value and information between A, B and the platform. UYT belongs to the third kind.
Our new official website will be online soon. Here are a few simple points: first of all, the architecture of UYT includes a relay chain, parachains, parathreads and bridges. In terms of scalability, it exceeds almost all public chains currently online.
In the UYT network, there are four kinds of consensus participants, namely the collector, fisherman, nominator and validator. The characteristics of this model are: first, everyone can participate without loss; second, anyone who contributes more to the ecology earns more rewards, and otherwise receives the corresponding punishment.
The underlying layer of UYT is Substrate, which uses the Rust programming language. Rust aims to be a language that elegantly solves the problems of high-concurrency, high-security systems. This is a great advantage that distinguishes us from other blockchain projects technically.
Q3. What are the roles in the UYT network? What are their respective functions?
Answer: After the main network of UYT is online, there will be four roles: collector, fisherman, nominator, and validator, which is totally different from the current system of the test network.
The collector, in short, is responsible for collecting all kinds of information in the parallel chains and packaging it for the validators.
The fisherman, to put it bluntly, does fishing law enforcement: it specifically checks for malicious acts and is rewarded for catching them.
The nominator is a group of stakeholders. The validator is their representative, and they entrust their deposit to the validator.
The validator packages new blocks in the network. It must mortgage a sufficient deposit and run a relay chain client on a highly available, high-bandwidth machine. It can be understood as a mining pool, or as the node in the current UYT DAPP.
Q4. What is the mining mechanism of the UYT network?
Answer: The only way to obtain UYT after its issuance is to participate in mining. In the initial stage, the daily output of UYT is fixed at 1,440,000, halving on a cycle like Bitcoin’s (see the sketch after the list below). Mining rewards can be obtained in the following five ways:
1) Asset pledge mapping mining
2) Becoming an intermediate chain node of the UYT network
3) The recommendation and reward mechanism
4) Voting rewards
5) The UYT network DAO will take 10% of the gas revenue from block packaging for community building and rewarding outstanding community members
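As referenced above, here is a minimal sketch of such an emission schedule. The four-year halving interval is an assumption borrowed from Bitcoin, since the exact UYT interval isn’t stated:

```python
# Sketch of a Bitcoin-style halving schedule applied to UYT's stated
# initial output of 1,440,000 tokens per day.
INITIAL_DAILY_OUTPUT = 1_440_000
HALVING_INTERVAL_DAYS = 4 * 365   # assumed; Bitcoin-like four-year cycle

def daily_output(day: int) -> float:
    """Daily token output after repeated halvings."""
    return INITIAL_DAILY_OUTPUT / (2 ** (day // HALVING_INTERVAL_DAYS))

for year in (0, 4, 8):
    print(f"year {year}: {daily_output(year * 365):,.0f} UYT/day")
# year 0: 1,440,000 / year 4: 720,000 / year 8: 360,000
```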
Q5. Blockchain rises and falls very fast. To give investors confidence, is there a detailed development plan, implementation steps and application direction for the UYT network over the next few months?
Answer: The UYT test network has been running stably for a year. After the mainnet launches, all mechanisms will undergo major changes.
The relationship between the UYT test network and the mainnet can be understood like the relationship between KSM (Kusama, the DOT test network) and the DOT mainnet: because the test network iterates faster, it demonstrates technical feasibility more quickly, and all future technology updates will move to the mainnet only after running stably on the test network.
In order to give users a better experience and give more rewards to excellent nodes, all Dao organizers are working hard for it.
The development team completed cross-chain support for Bitcoin and some high-quality Ethereum-based tokens early on, and the code is now fully open source. For other mainstream currencies, community members can apply for funds to develop support. To develop the ecology and build a better technical reserve, we will set up a special ecological development fund when the mainnet goes online. The transfer bridge is our key funding direction, and a single team can apply for as much as $100,000. In addition, other public chains that want to connect to UYT will get technical support. To encourage developers to participate in ecological construction, the DAO has also launched a series of grants to support development. Developers can port the better applications from ETH and EOS directly, or develop new products according to their own strengths. These directions are the current focus of funding.
Because the UYT test network went online early, it is based on the earlier Substrate 1.0. On-chain governance can only be realized after the upgrade to 2.0 is completed.
At present, the upgrade is proceeding steadily, and on-chain governance will be implemented in the mainnet when the UYT mainnet launches.
As a heterogeneous cross-chain solution with high scalability and extensibility, the UYT network can in theory perfectly bridge parallel encryption systems and their crypto assets, and its wide future applicability can be expected. Therefore, we do not limit the areas where the UYT network will play its role. In the general direction, though, the main sectors will be DeFi and DEX ecosystems. By industry, it can cover a wide range of fields: not only finance, but also games, entertainment, shopping malls, real estate and so on.
Q6. How can UYT help DeFi?
Answer: The UYT network can not only link different public chains but also let parallel chains be independent yet interconnected. Look at the ACALA project some time ago, which successfully obtained a $7 million SAFT agreement from Pantera Capital. Although the concept of DeFi is very popular now, all DeFi products still live inside individual public chain ecosystems, and a cross-chain DeFi ecosystem has not been developed. UYT is about achieving cross-chain communication and value exchange, and developing truly decentralized financial services and products, such as cross-chain decentralized flash swaps, cross-chain asset backing, cross-chain decentralized lending, oracles and other products. Our technical team is also speeding up the construction of infrastructure suitable for landing more DeFi products and services, committed to creating a real cross-chain DeFi ecosystem, and this is only a small step in UYT’s future plan.
Q7. TKNT should be one of the hottest projects in the UYT ecosystem recently. Please give us a brief introduction to the TKNT project and the value of TKNT in the UYT ecosystem. Why was TKNT able to rise 400x in 7 days? And what is the cooperative relationship between UTC and TKNT?
Answer: I will cover each project from the technical and resource sides. Let’s first introduce UTC. UTC is the token of the Copernican network and the first project in UYT’s game and entertainment ecology; in the future it will be responsible for linking high-quality public chains in the entertainment industry. Because UYT’s slots are limited, each field will have one high-quality partner, which will be helped to become a secondary relay chain of UYT. After the UYT mainnet goes online, many chains will want to access UYT for greater value circulation; since UYT’s external slots are limited and expensive, a chain can choose to connect to UTC first, and UTC then connects to UYT. As its links with UYT grow, UTC will gradually evolve into a secondary relay chain of the UYT network. UTC’s resources span online and offline, including offline payments and offline entity applications, and it also has a very large community base.
Its ecological partners have very good operating experience in the game industry. They will use blockchain technology to change the whole game entertainment industry, making it more transparent and fair, while providing plenty of real consumption scenarios. This is also why the UYT network chose to cooperate with it: the UTC project has received support from the UYT ecological fund, and after the mainnet launches it will be the first ecological cooperation project supported by UYT. Because of the timing of the UYT mainnet, UTC can’t form its own chain yet and will initially issue on Ethereum. TKNT is a new concept project from TKN.com. TKN is currently the largest online centralized guessing-game platform in the world. TKNT mixes bet mining and DeFi, so fixed mining can be carried out through platform games, building a system that enables game participation and in-app payment in all ERC-20-based DApps, combined with various financial services.
The reason TKNT created a 400x myth in 7 days is that the TKN platform has a buyback plan. As everyone knows, online guessing-game platforms are amazingly profitable, and each quarter profits are used for buybacks; this strong profit support drove the token’s huge rise. In the future, all users will be able to use UTC to participate in TKN games, so the UYT mainnet launch is also of great significance to TKNT. As UYT’s ecology and technology mature, TKNT can perform even more strongly. If TKNT wants to link more public chains, it needs to access the UYT network and realize a bigger vision through UYT’s cross-chain interaction. After TKNT was listed on exchanges, the price rose as high as $14 and has now dropped to about $2.50. You will see it set a record high again and create greater miracles, and you will also see that $3 will be the best buying point for TKNT, because TKNT has several major moves coming: the global MLM plan launches on October 7 in Korea, China and other countries, and many marketing teams in Europe will promote TKNT, including DAPP.com. As a shareholder of TKN, it will also make every effort to promote TKNT. Secondly, TKNT will be listed next month on the largest digital currency exchange in South Korea, and Chinese users will see TKNT on Binance in November. Of course, UYT’s decentralized trading platform will also launch in the future.
Q8. What is the significance of the launch of UYT’s main network for the industry and ecology?
Answer: UYT is one of the few cross-chain platform projects in the industry at present.
There are many public chains and coin-issuing projects. Why? Because they are less work for more money. Cross-chain platforms, however, have very high technical and capital requirements. The barrier is very high, so almost no project team is willing to do this. But once it is done, it will be of great significance to the whole digital currency and blockchain industry.
Because it will subvert the status quo in which the whole coin circle and chain circle each act on their own, every project staking out its own little kingdom. It lets each independent ecosystem achieve a truly decentralized, trust-free cooperative relationship. This huge change will push the whole industry to develop into a healthy, virtuous-circle macro ecosystem.
Q9. The slogan of many project supporters is that UYT should surpass Ethereum. What is the difference in technology between UYT network and Ethereum?
Answer: Thank you so much for supporting UYT. In fact, the correct understanding is that UYT is the next era after Ethereum. First of all, UYT has a different vision from Ethereum.
Before the emergence of UYT, chains like Ethereum and EOS, no matter how well they developed, belonged to the era of the single chain; the popular metaphor is a LAN. UYT, however, can realize the interoperability of all chains and bring blockchain into the Internet era. Secondly, UYT is far superior to Ethereum in technology, mainly in three aspects: shared security, heterogeneous cross-chain, and forkless upgrades.
While Ethereum 2.0 has not yet been implemented, UYT is the most friendly base layer for the DeFi projects and other DApps on Ethereum. UYT’s chain-building framework, Substrate, is compatible with Ethereum’s smart contract language Solidity, so ETH developers can easily migrate their smart contracts to UYT.
Up to now there has been no good solution to Ethereum’s congestion problem, whereas the UYT network solves it. What’s more, UYT can easily perform one-click online upgrades, instead of having to redeploy a set of contracts on Ethereum for each new version and then requiring users to migrate their original assets from the old contract to the new one. Developers can quickly and flexibly iterate their protocols and change their application solutions as the situation requires, serving more users and solving more problems. They can also repair contract loopholes very quickly, and in the case of hacker attacks, problems such as stolen funds can be resolved through parachain governance. For Ethereum, then, UYT not only solves the congestion problem in front of us but also provides the most important infrastructure for applications like DeFi to truly mature into open financial applications that can serve everyone. It also opens blockchain’s Web 3.0 era. In terms of market value, Ethereum currently has strong ecological construction, with a market value of $40 billion. UYT will also focus on developing this aspect after the mainnet goes online. Whether in market value or ecological construction, I have full confidence in UYT; after all, we are fully prepared.
Q10. What is the progress of the ecological construction of UYT? What opportunities do current ecological partners see in UYT or what changes may be brought about by UYT ecology?
Answer: After the UYT mainnet goes online, there will be a series of ecological construction actions, with more attention paid to establishing contact with traditional partners. Cross-chain decentralized flash swaps, cross-chain asset backing, cross-chain decentralized lending, oracles and other products will also be key directions for cooperation.
UYT will give priority to the game and entertainment industry, because this industry is most easily disrupted by blockchain. As UYT’s ecological construction grows, future slots will become more and more expensive; the earlier a partner joins the UYT ecology, the more support it gets from the ecological fund, because the fund is also limited. From the perspective of token appreciation, every project that cooperates with UYT in the future will need to pledge a certain amount of UYT to bid for slots, and apart from ecological rewards, the rest must be purchased on the open market.
The difference between this pledge and a pledge as we usually understand it is that the UYT pledged by an ecological partner in a slot auction does not earn mining computing power.
The UYT mainnet offers several opportunities for eco-partners to look forward to. The first is Bitcoin: Bitcoin will move later than other assets, but eventually all the bubble and value will return to BTC, and after the DeFi bubble washes out, the focus will turn squarely to Bitcoin. The UYT ecology can provide a more mature base layer for DeFi. Moreover, today’s DeFi on Ethereum is limited to Ethereum and ERC-20 tokens, and Bitcoin’s breakout point has not yet arrived. The DeFi of the UYT ecology may therefore be the next opportunity, which is a good opportunity for everyone.
The second opportunity is that after the mainnet goes online, future UYT ecological projects will compete to bid for slots. The original intention of UYT is to realize the interconnection of all chains, and chains outside the UYT ecology also need to communicate. The third is cross-fi: DeFi was hatched on Ethereum, while DeFi on UYT can operate across multiple chains. For example, users of TKN games or the future UTC game platform will be able to call Bitcoin on the UYT chain. This form of decentralized finance belongs to UYT’s cross-chain era and can be called cross-fi.
Q11. Which exchanges will UYT list on next? What is the listing strategy?
Answer: As the founder of IGNISVC and the head of the UYT DAO organization, I have always had good cooperative relations with major exchanges all over the world. TKNT will appear on several exchanges one after another: the HitBTC exchange in the United Kingdom, the Upbit and Bithumb exchanges in South Korea, the Bitfinex exchange in the United States, and the Binance, BKEX and KuCoin exchanges in China are all our partners, and they have been paying close attention to UYT’s development. UYT is the public chain with the largest user base and the highest community participation in the cross-chain field, so its future value is immeasurable. If we list on an exchange, we will choose one of the above. But the vision of UYT is to create fairer, safer and more transparent circulation in the digital currency field, with users keeping custody of all their assets themselves. Therefore, from the beginning there is a simple DEX in the UYT wallet, doing simple order matching with on-chain settlement. After UYT’s DEX is complete, more transactions may move to it.
However, after the UYT mainnet is online, centralized exchanges can directly access and synchronize UYT’s block data, and it is not ruled out that some exchanges will list UYT for trading directly. Such exchanges will not enjoy the support of UYT’s ecological support fund. The UYT network project is community-led: each exchange cooperation plan will be shared with the community, and the DAO organization can only implement it according to the voting results.
Q12. What are the plans for promoting ecological development and marketing around the UYT mainnet launch?
Answer: The launch of the main network will be completed around October 15.
Offline, due to the epidemic, we will organize market activities jointly with nodes in different countries. Three large-scale offline meetups have already been confirmed, and we will start a global roadshow when the epidemic is over.
Online, we have opened WeChat, Kakao, Twitter, Reddit and Telegram communities. We will carry out AMA activities in various countries and promote UYT around the world in various ways. Of course, we will launch MLM plans and cooperate with more marketing teams.
submitted by tkntfoundation to u/tkntfoundation

A new whitepaper analysing the performance and scalability of the Streamr pub/sub messaging Network is now available. Take a look at some of the fascinating key results in this introductory blog


Streamr Network: Performance and Scalability Whitepaper


The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.
The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t yet show a track record of scalability, for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at a large, global scale. The paper answers these questions.
Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise in blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.

The latency vs. bandwidth tradeoff

The current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.
Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.
There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.

Network diameter scales logarithmically

One useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the “longest shortest path”). The below plot shows how the network diameter behaves depending on node degree and number of nodes.

Network diameter
We can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’. This is a property of random regular graphs, and this is very good — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter using various node degrees:
Network diameter in network of 100 000 nodes
We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

Number of duplicates received by the non-publisher nodes
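These relationships are easy to sanity-check. Below is a small sketch (not the paper’s code; the sizes, degrees, and seed are arbitrary choices) that builds random regular graphs with networkx and reports the diameter alongside the per-node duplicate count that naive flooding would produce:

```python
# Sanity-check sketch, not Streamr code: diameter of random d-regular graphs.
import networkx as nx

for degree in (4, 8, 16):
    for n in (64, 256, 1024):
        g = nx.random_regular_graph(degree, n, seed=1)
        if not nx.is_connected(g):
            continue  # random regular graphs with d >= 3 are connected w.h.p.
        # Diameter = the "longest shortest path". In naive flooding each node
        # hears the message from every neighbour, so duplicates ~= degree - 1.
        print(f"n={n:>5}  degree={degree:>2}  "
              f"diameter={nx.diameter(g)}  dups/node~{degree - 1}")
```

Running it shows the diameter creeping up by only a hop or two as n grows 16-fold, while the duplicate penalty tracks the degree alone.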
In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:
  • The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see next section).
  • A node degree of 4 yields good latency/bandwidth balance, and we have selected this as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.
It’s worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not the number of subscribers. With a node degree of 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.
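A back-of-the-envelope check of that claim (my arithmetic with an assumed 1 KB message size, not a figure from the paper):

```python
# Publisher upload cost: degree-limited P2P fan-out vs. a centralized broker.
msg_size = 1_000          # bytes per data point (assumed)
subscribers = 1_000_000
degree = 4

p2p_upload = degree * msg_size          # one copy per overlay neighbour
broker_upload = subscribers * msg_size  # a central broker pushes every copy itself

print(f"P2P publisher upload:  {p2p_upload:,} bytes per message")     # 4,000
print(f"Central broker upload: {broker_upload:,} bytes per message")  # 1,000,000,000
```

Four kilobytes versus a gigabyte per message: the subscribing nodes share the fan-out work among themselves.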

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes from 32 to 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.
The below image displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.
CDF of message propagation delay
From this graph we can easily read things like: in a 32-node network (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.
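For anyone who wants to reproduce this kind of plot from their own measurements, an empirical CDF is a few lines of numpy (the latency samples below are made up for illustration):

```python
# Empirical CDF of message propagation delays (illustrative fake data).
import numpy as np

latencies_ms = np.array([112.0, 139.0, 148.0, 153.0, 161.0, 175.0, 190.0, 243.0])
xs = np.sort(latencies_ms)
cdf = np.arange(1, len(xs) + 1) / len(xs)  # fraction of deliveries within xs[i] ms

for x, p in zip(xs, cdf):
    print(f"{p:5.0%} of deliveries within {x:.0f} ms")
# A reading like "50% within 150 ms" is just the x where the CDF crosses 0.5.
```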
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won’t get adopted. It’s pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles’ heel of P2P networks (think of how slow blockchains are!). And the Network will only get better with time.
Then we tackled the big question: does the latency behave logarithmically?
Mean message propagation delay in Amazon experiments
Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.
The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between average topologies and the best topologies gives us a glimpse of how much room for optimisation there is, i.e. with a smarter-than-random topology construction, how much improvement is possible (while still staying in the realm of regular graphs)? Out of the observed topologies, the difference between the average and the best observed topology is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages too.
It’s also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it’s still worth asking what the latency penalty of peer-to-peer is.

Relative delay penalty in Amazon experiments
As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP). It’s the latency in the peer-to-peer network (shown in the previous plot), divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.
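In code, the metric is exactly as simple as it sounds (the sample values here are illustrative, not measurements from the paper):

```python
# RDP = overlay latency / direct-connection (ping) latency; 1.0 = no penalty.
def relative_delay_penalty(p2p_latency_ms: float, ping_ms: float) -> float:
    return p2p_latency_ms / ping_ms

print(relative_delay_penalty(210.0, 80.0))  # ~2.6, inside the 2-3 band reported above
```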
Again, given that latency is the Achilles’ heel of decentralized systems, that’s not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, only excluding the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn’t matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It’s useful for a messaging system to have consistent and predictable latency. Imagine for example a smart traffic system, where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still haven’t received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.
So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known. We applied Dijkstra’s algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:
Mean message propagation delay in Amazon experiments
We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of a processing delay at each node.
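The estimation procedure is straightforward to sketch (my reconstruction of the idea, not the paper’s code): model the topology as a weighted graph whose edge weights are the measured link latencies, then take Dijkstra shortest paths from the publisher:

```python
# Estimate propagation delay from topology + per-link latencies via Dijkstra.
import networkx as nx

g = nx.Graph()
# (node_a, node_b, measured link latency in ms) - illustrative numbers
g.add_weighted_edges_from([
    ("publisher", "a", 40.0), ("publisher", "b", 95.0),
    ("a", "b", 30.0), ("a", "c", 110.0), ("b", "c", 20.0),
])

est = nx.single_source_dijkstra_path_length(g, "publisher", weight="weight")
delays = [ms for node, ms in est.items() if node != "publisher"]
print(f"estimated mean propagation delay: {sum(delays) / len(delays):.1f} ms")
# Real measurements sit slightly above such estimates because each hop also
# adds a little processing delay, which this model ignores.
```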

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.
It’s thrilling to think that by accepting a latency only 2–3 times longer than the latency of an unscalable and insecure direct connection, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all that becomes available out of the box.
In the real-time data space, there are plenty of other aspects to explore, which we didn’t cover in this paper. For example, we did not measure throughput characteristics of network topologies. Different streams are independent, so clearly there’s scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is mainly limited, therefore, by the hardware and network connection used by the network nodes involved in a topology. Measuring the maximum throughput would basically be measuring the hardware as well as the performance of our implemented code. While interesting, this is not a high priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.
Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we’re currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.
I hope that this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and more detail about the research, we invite you to read the full paper.
If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to this Reddit thread or email [[email protected]](mailto:[email protected]).
Originally published by Henri at blog.streamr.network on August 24, 2020.
submitted by thamilton5 to streamr [link] [comments]

Network effect doesn't explain Bitcoin dominance : technical arguments for Bitcoin Maximalism

I've often seen some Bitcoiners counter altcoin shillers with the following argument: "Bitcoin is the first so it has the network effect, so your altcoin will always be worth less"
This is not totally wrong, but I think this argument is bad. Using it implicitly admits that Bitcoin is technically less capable than altcoins with their higher transaction rates, fast blocks, instant confirmations, zero fees and smart contract support....

And this is WRONG! Bitcoin's price is higher because Bitcoin is already TECHNICALLY BETTER than any other altcoin, not because of network effect. In 2017, Bitcoin dominance was lower and some altcoins reached almost the same level as Bitcoin. So network effect is not what prevents an altcoin from taking the lead; it is the fact that almost all altcoins that pretend to be better than Bitcoin are technically flawed.
Why? They use bigger blocks, DAGs (Directed Acyclic Graphs, as used in IOTA or Nano) or whatever they created to distract you and make you believe they are better than the good old Bitcoin blockchain. But they have at least one of these two weak points: validation time and bandwidth.

If they use bigger/faster blocks, block validation time is higher and so block propagation is slower. Fewer nodes can operate, as they may not be able to validate blocks fast enough to keep up with the tip of the chain. You end up like Ethereum, which has fewer and fewer full verification nodes, leading to centralization of the network.
If they use DAGs, they achieve consensus through a "proof-of-stake" vote, and always at the cost of bandwidth (quadratic cost in the number of nodes, linear in transaction rate). DAGs are often presented as "each node has its own ledger", but the reality is that the only global ledger you should trust is the complete DAG of the ledgers of each node. Only servers with a shitload of download bandwidth can maintain it, as debunked here and here. To not look (in fact, be) centralized, some DAG altcoins don't, opening the door to history rewrites by buying an account that owned a big stake before, and they still use a lot of bandwidth to reach consensus. Bitcoin just adds 80 bytes of proof-of-work data to a block of 1-2MB to achieve this and protect against history rewrites, at almost zero cost to the network (that's why we pay fees to the miners who carry the cost).
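To make the asymmetry concrete, here is rough arithmetic treating the post's own claims as assumptions (an all-to-all vote per round and a 100-byte vote message are illustrative figures, not measurements):

```python
# O(n^2) vote traffic for a DAG-style stake vote vs. Bitcoin's 80-byte PoW header.
nodes = 10_000
vote_size = 100                           # bytes per vote message (assumed)
dag_traffic = nodes * nodes * vote_size   # one round of all-to-all voting

block_size = 2_000_000                    # a 2 MB block
pow_overhead = 80                         # bytes of PoW data in the header
print(f"DAG vote round: ~{dag_traffic / 1e9:.0f} GB across the network")
print(f"Bitcoin PoW overhead per block: {pow_overhead / block_size:.4%} of the block")
```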
These problems only appear at a critical level of usage, and only then can we see how much those limits were ignored. Until now, very few altcoins have reached that limit (maybe Ethereum; and IOTA showed us it is already centralized by stopping the network).
Bitcoin has the highest price because Bitcoin is technically the only possible decentralized king (and a centralized cryptocurrency is worth nothing). Yes, maybe the 1MB limit was too conservative; yes, fees are higher; yes, 10 minutes is slow. But if everyone wants their coffee payment settled on-chain, 1MB or 8MB or 32MB every 10 minutes, minutes, or seconds will never be enough, while the difference matters greatly for the network's health. You only need the blockchain for settlement; for payments you have the Lightning Network, which can already destroy PayPal's transaction rate.
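The arithmetic behind that claim is short (assuming an average transaction size of roughly 250 bytes, which is my assumption, not the post's):

```python
# On-chain throughput at various block sizes with 10-minute blocks.
avg_tx_bytes = 250
for block_mb in (1, 8, 32):
    tps = block_mb * 1_000_000 / avg_tx_bytes / 600  # transactions per second
    print(f"{block_mb:>2} MB blocks: ~{tps:,.0f} tx/s")
# ~7, ~53 and ~213 tx/s: orders of magnitude below global retail payment
# volume, which is the point - settle on-chain, pay on Lightning.
```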
So next time you feel convinced by an altcoin that pretends to be better than Bitcoin while staying decentralized by design, evaluate the requirements for running a fully validating node and the bandwidth overhead needed to achieve consensus. Chances are high that it didn't take care of one of these two issues and ends up centralized or broken. You don't need the network effect argument.
submitted by Pantamis to Bitcoin [link] [comments]

"Bitfury study estimated that 8mb blocks would exclude 95% of existing nodes within 6 months." - Tuur Demeester

submitted by qubeqube to Bitcoin [link] [comments]

It looks like block propagation on Bitcoin's network has hit an efficiency floor of 1-2 seconds. Delay may no longer be the scaling bottleneck, but we'll have to see how this holds up when blocks are consistently full to get a better measurement.

submitted by StopAndDecrypt to Bitcoin [link] [comments]

DFINITY Research Report

Author: Gamals Ahmed, CoinEx Business Ambassador
ABSTRACT
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signatures scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A “weight” is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates the nothing-at-stake and selfish mining attacks.
The DFINITY consensus algorithm is made to scale through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony, including network splits, while remaining provably secure under synchrony.

1.INTRODUCTION

DFINITY is building a new kind of public decentralized cloud computing resource. The company’s platform uses blockchain technology which is aimed at building a new kind of public decentralized cloud computing resource with unlimited capacity, performance and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems, enabling organizations to design and deploy custom-tailored cloud computing projects, thereby reducing enterprise IT system costs by 90%.
DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking the opportunity with powerful new crypto.
Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum.
DFINITY’s consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers like smart contract applications), and identity (provides a registry of all clients).

Figure 1: DFINITY’s consensus mechanism layers
1. Identity layer:
Active participants in the DFINITY Network are called clients, and they are registered with permanent identities under pseudonyms. Moreover, DFINITY supports open membership by providing a protocol for registering new clients by depositing a stake with an insurance period. This is the responsibility of the first layer.
2. Random Beacon layer:
Provides the source of randomness (VRF) for all higher layers including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) that is produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

3. Blockchain layer:
The third layer deploys the “probabilistic slot protocol” (PSP). This protocol ranks the clients for each height of the chain, in an order that is derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer’s rank such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by giving favor to the “heaviest” chain in terms of accumulated block weight — quite similar to how traditional proof-of-work consensus is based on the highest accumulated amount of work.
The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for homogeneous network bandwidth utilization; a race between clients would instead favor usage in bursts.
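A toy illustration of such a beacon-derived ranking (my sketch, not DFINITY source code; the hash-and-sort construction and the 2^-rank weight decay are assumed examples, not the protocol's exact functions):

```python
# Deterministic per-height ranking of clients from a random beacon output.
import hashlib

def rank_clients(beacon_output: bytes, clients: list[str]) -> list[str]:
    # Every node computes the same ranking from the same beacon value.
    return sorted(clients, key=lambda c: hashlib.sha256(beacon_output + c.encode()).digest())

ranking = rank_clients(b"beacon-for-height-42", ["client-a", "client-b", "client-c", "client-d"])
for rank, client in enumerate(ranking):
    print(f"rank {rank}: {client}  (example proposal weight {2.0 ** -rank})")
```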
4. Notarization layer:
Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality. A notarization is a threshold signature under a block, created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer.
DFINITY achieves its high speed and short block times exactly because notarization is not full consensus.
DFINITY does not suffer from selfish mining attacks or the nothing-at-stake problem because the notarization step makes it impossible for an opponent to build and maintain a series of linked and notarized blocks in secret.
DFINITY’s consensus is designed to operate on a network of millions of clients. To enable scalability to this extent, the random beacon and notarization protocols are designed such that they can be safely and efficiently delegated to a committee.

1.1 OVERVIEW ABOUT DFINITY

DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the “internet computer,” to host the next generation of software and data. It is a decentralized and non-proprietary network to run the next generation of mega-applications, and the project has dubbed this public network “Cloud 3.0”.
DFINITY is a third-generation virtual blockchain network that sets out to function as an “intelligent decentralised cloud,”¹ strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland.
DFINITY is a decentralized network design whose protocols generate a reliable “virtual blockchain computer” running on top of a peer-to-peer network upon which software can be installed and can operate in the tamperproof mode of smart contracts.
DFINITY introduces algorithmic governance in the form of a “Blockchain Nervous System” that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems.
DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Whereas DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering its technology to Ethereum for adoption. DFINITY has labeled itself Ethereum’s “crazy sister” to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and neuron-inspired governance model.
Dfinity raised $61 million from Andreesen Horowitz and Polychain Capital in a February 2018 funding round. At the time, Dfinity said it wanted to create an “internet computer” to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project’s total funding to $195 million.
In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop. It was part of the company’s plan to create a “Cloud 3.0.” Because of regulatory concerns, none of the tokens went to US residents.
DFINITY would broaden and strengthen the EVM ecosystem by giving applications a choice of platforms with different characteristics. However, if DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, then it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. Of course, the challenge for DFINITY will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.

1.1.1 DFINITY FUTURE

  • DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
  • DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
  • The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
  • Dfinity aims to reduce the costs of cloud services by creating a decentralized “internet computer”, which may launch in 2020.
  • Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.

1.1.2 DFINITY’S VISION

DFINITY’s vision is its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS) — sometimes also referred to as the Network Nervous System or NNS.

1.2 DFINITY COMMUNITY

The DFINITY community brings people and organizations together to learn and collaborate on products that help steward the next generation of internet software and services. The Internet Computer allows developers to take on the monopolization of the internet, and return the internet back to its free and open roots. We’re committed to connecting those who believe the same through our events, content, and discussions.


1.3 DFINITY ROADMAP (TIMELINE)

February 15, 2017
Ethereum based community seed round raises 4M Swiss francs (CHF)
The DFINITY Stiftung, a not-for-profit foundation entity based in Zug, Switzerland, raised the round. The foundation held $10M of assets as of April 2017.
February 8, 2018
Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz
The $61M round led by Polychain Capital and Andreessen Horowitz, along with a DFINITY Ecosystem Venture Fund which will be used to support projects developing on the DFINITY platform and an Ethereum-based raise in 2017, brings the total funding for the project to over $100 million. This is the first cryptocurrency token that Andreessen Horowitz has invested in, led by Chris Dixon.
August 2018
Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020
Dfinity launches an open source platform aimed at the social networking giants

2.DFINITY TECHNOLOGY

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.
At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of “miners” is randomly selected to add a new block to the chain. An individual miner’s probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a “weight” is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks).
A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a verifiable random function (VRF), a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures no actor in the network can determine the outcome of the next random assignment.
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript).
Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.

2.1 DFINITY CORE APPLICATIONS

The DFINITY cloud has two core applications:
  1. Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
  2. Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc). Despite increasing computational cost, DFINITY intends to lower net costs “by 90% or more” through reducing the human capital cost associated with sustaining and supporting these services.
Whilst conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (Blockchain Nervous System — BNS) to facilitate Corporate adoption.
DFINITY recognises that different users value different properties and sees itself as more of a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network.
In the future, DFINITY hopes that much of their “new crypto might be used within the Ethereum network and are also working hard on shared technology components.”
As the DFINITY project develops, the DFINITY Stiftung foundation intends to steadily increase the BNS’ decision-making responsibilities, eventually resulting in the dissolution of its own involvement entirely, once the BNS is sufficiently sophisticated.
DFINITY consensus mechanism is a heavily optimized proof of stake (PoS) model. It places a strong emphasis on transaction finality through implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method to address many of the problems associated with PoS consensus.

2.2 THRESHOLD RELAY

As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. They aim to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features, like their Blockchain Nervous System (BNS) for algorithmic governance.
One of the primary components of the platform is its novel Threshold Relay Consensus model from which randomness is produced, driving the other systems that the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model.
“Threshold Relay is the mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next.”
Threshold Relay consists of four layers (As mentioned previously):
  1. Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
  2. Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
  3. Random beacon, which, as previously covered, provides the source of randomness for all higher layers, like the blockchain layer and smart contract applications.
  4. Identity layer that provides a registry of all clients.

2.2.1 HOW DOES THRESHOLD RELAY WORK?

Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try to form into a “threshold group”. The composition of each group is entirely random, such that groups can intersect and clients can be present in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they register the public key (“identity”) created for their group on the global blockchain using a special transaction, such that it becomes part of the set of active groups in a following “epoch”. The network begins at “genesis” with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values — if they were not, then the group’s signatures on messages would be predictable and the threshold signature system insecure — and each random value produced this way is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
In a cryptographic threshold signature system a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here the preceding group’s threshold signature), creating individual “signature shares” that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. So, for example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group’s signature on the message. Other group members can validate each signature share, and any client using the group’s public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is “unique and deterministic”, meaning that from whatever subset of group members the required number of signature shares is collected, the single threshold signature created is always the same and only a single correct value is possible.
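The quoted “unique and deterministic” property can be illustrated with plain Shamir secret sharing, which has the same any-threshold-subset-reconstructs-the-same-value behavior (a stand-in sketch only: real BLS threshold signatures rely on pairing-based cryptography, which this does not implement):

```python
# Any threshold-sized subset of shares reconstructs exactly the same value.
import random
from itertools import combinations

P = 2**127 - 1  # a Mersenne prime field, size chosen for illustration
SECRET, THRESHOLD, SHARES = 123456789, 3, 6

coeffs = [SECRET] + [random.randrange(P) for _ in range(THRESHOLD - 1)]
def f(x):  # random degree-(THRESHOLD-1) polynomial with f(0) = SECRET
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
shares = [(x, f(x)) for x in range(1, SHARES + 1)]

def reconstruct(subset):
    # Lagrange interpolation at x = 0 over the prime field.
    total = 0
    for xi, yi in subset:
        num = den = 1
        for xj, _ in subset:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

assert all(reconstruct(s) == SECRET for s in combinations(shares, THRESHOLD))
print("every 3-of-6 subset reconstructs the same value:", SECRET)
```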
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and signatures generated by relaying between groups produces a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400, and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10e-17.
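That failure probability can be checked with a hypergeometric tail (my calculation with scipy, reproducing the quoted scenario):

```python
# P(a random 400-member group contains >= 200 faulty members), drawing from
# 10,000 processes of which 3,000 are faulty.
from scipy.stats import hypergeom

population, faulty, group_size = 10_000, 3_000, 400
p_bad = hypergeom(population, faulty, group_size).sf(199)  # P(X >= 200)
print(f"P(group can be stalled) = {p_bad:.2e}")
# Astronomically small, consistent with the bound quoted above.
```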

2.3 DFINITY TOKEN

The DFINITY blockchain also supports a native token, called dfinities (DFN), which performs multiple roles within the network, including:
  1. Fuel for deploying and running smart contracts.
  2. Security deposits (i.e. staking) that enable participation in the BNS governance system.
  3. Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem.
Neuron operators can earn Dfinities by participating in network-wide votes, which could be concerning protocol upgrades, a new economic policy, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.

2.4 SCALABILITY

DFINITY is constantly developing a structure that separates consensus, validation, and storage into separate layers. The storage layer is divided into multiple shards, each of which is responsible for processing transactions that occur in its shard's state. The validation layer is responsible for combining the hashes of all shards into a Merkle-like structure, the result of which is a fingerprint of the global state that is stored in blocks in the top-level chain.

2.5 DFINITY CONSENSUS ALGORITHM

The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone — Dfinity’s team also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising one bit on decentralization.
Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: On top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.
The roots of the DFINITY consensus mechanism date back to 2014, when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol and it took several iterations to reach its current design.
For any practical consensus system the difficulty lies in navigating the tight terrain that one is given between the boundaries imposed by theoretical impossibility-results and practical performance limitations.
The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.

2.6 LINKEDUP

The startup has rather cheekily called this “an open version of LinkedIn,” the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs on any browser, is not owned or controlled by a corporate entity.
LinkedUp is built on Dfinity’s so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services.
The software is hosted directly on the internet in a Switzerland-based independent data center, but in the concept of the Internet Computer, it could be hosted at your house or mine. The compute power to run the application (LinkedUp, in this case) comes not from Amazon AWS, Google Cloud or Microsoft Azure, but from the distributed architecture that Dfinity is building.
Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four nodes, and up to an unlimited number, in Dfinity’s global network of independent data centers.
Dfinity is open-sourcing LinkedUp so developers can create other types of open internet services on the architecture it has built.
“Open Social Network for Professional Profiles” suggests that on Dfinity’s model one can create an “Open WhatsApp”, “Open eBay”, “Open Salesforce” or “Open Facebook”.
The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity’s Internet Computer.
“The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development,” Williams said. “The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far.”
Dfinity says its “Internet Computer Protocol” allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services it creates “mutual network effects” where everyone benefits.
On 1 November, DFINITY released 13 new public versions of the SDK, leading to its second major milestone [at WEF Davos] of demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones towards the public launch of the Internet Computer will involve:
  1. Onboarding a global network of independent data centers.
  2. Fully tested economic system.
  3. Fully tested Network Nervous System for configuration and upgrades.

2.7 WHAT IS MOTOKO?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.
Full article
submitted by CoinEx_Institution to u/CoinEx_Institution [link] [comments]


Running a Bitcoin full node comes with certain costs and can expose you to certain risks. This section will explain those costs and risks so you can decide whether you’re able to help the network. Special Cases. Miners, businesses, and privacy-conscious users rely on particular behavior from the full nodes they use, so they will often run their own full nodes and take special safety ...

I just bought a Raspberry Pi to run a bitcoin node exclusively. I naively thought that the only bandwidth that will be used is when the node downloads new blocks (so around 50-60GB per month). My ISP will definitely throttle or block my connection if I use 450GB per month only for bitcoin (on top of my existing bandwidth).

Change LIMIT to be the maximum bandwidth you want Bitcoin Core to use (I chose 1mbit). If you don’t have any other Bitcoin Core nodes in your local network, you can delete the line that says LOCALNET. This line is there to make a bandwidth exception for port 8333 communications within your local network (i.e. not out to the internet).

This is roughly the upper limit for the number of wallets that are online and connected to the Bitcoin network at any one time. (If there were more people online at once than that, people would start seeing various issues.) This doesn't include wallets that don't actually connect to the Bitcoin network, of course. If I look at my long-running listening full node, I currently have incoming ...

What’s a Bitcoin full node? The Bitcoin network is a collection of computers all over the world running the Bitcoin Core software that verifies transactions and blocks. It’s the distribution of these “nodes” (the term for a computer attached to the network) and the fact that anyone can set one up that makes Bitcoin “decentralized.”
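Alongside external shaping scripts like the one referenced above, Bitcoin Core does ship one coarse built-in knob: the maxuploadtarget option, which soft-caps outbound traffic (mainly the serving of historical blocks) over a 24-hour window rather than rate-limiting like tc does. A minimal sketch that appends it to a default-location bitcoin.conf; the path and the 5000 MiB/day figure are assumptions for illustration:

```python
# Append a daily upload target to bitcoin.conf (default Linux datadir assumed).
from pathlib import Path

conf = Path.home() / ".bitcoin" / "bitcoin.conf"
conf.parent.mkdir(parents=True, exist_ok=True)
with conf.open("a") as fh:
    fh.write("\n# keep outbound traffic under ~5 GB per 24h\nmaxuploadtarget=5000\n")
print(f"appended maxuploadtarget to {conf}")
```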



