
Why We’re Building Casimir

Note: This started out as an internal document, reconfirming our priorities and values as we watched the First Contagion, the collapse of Luna, in early 2022. Today, we’re watching the Second Contagion and the collapse of one of the over-leveraged wildcat banks we allude to below. Although we didn’t predict the collapse of FTX, nor are we trying to predict what’s next, we continue to operate by our principles-first approach. That approach means working on solutions for our core principles of self-custody and peer-to-peer transactions in the Web3 space. It’s why we have never worked with or used FTX and why our current roadmap remains unchanged. What the FTX collapse has done, though, is affirm our approach and hasten our desire to fix the UI/UX problems that create a barrier to self-custody for many users. We hope the following gives insight into why we’re taking the approach we are, and we hope you’ll join us on our journey to making Web3 better, easier, and safer for all.

It’s always a good idea to revisit your priorities and values, especially in this time of uncertainty in blockchain, cryptocurrency, Web3, and technology at large. We’ve written before about the value proposition of blockchain and why we’re building technology as close to the consensus layer as possible. Fundamentally, the consensus mechanism is what powers a new medium: the exchange of value without the need for a third party, peer-to-peer transactions. It’s clear that many of the current issues in the Web3 space are due to egregious speculative activity and attempted takeovers by centralized entities acting far away from the consensus layer.

We’ll briefly touch on some recent issues in Web3, the major reasons we think they occurred, and why we’re building Casimir to help fix this.

Bridge Attacks – In February 2022, the DeFi platform Wormhole was exploited for $325 million. Wormhole was a popular VC-backed blockchain bridge designed to let users access tokens across chains through a single access point. More recently, the Binance Smart Chain was exploited for $100M+. While bridges are a potentially convenient answer to the mass of protocols in existence, a single smart contract or hot wallet holding $100M+ in deposited tokens is proving too attractive a target for hackers. So far in 2022, over $2B worth of tokens has been stolen from bridges!

Decentralized in Name Only – The first of the warning bells of the impending 2022 cryptocurrency sell-off was the collapse of Terra. There are a range of reasons why Terra collapsed, but simply put, algorithmic stablecoins backed by digital assets face fundamental challenges due to the volatile nature of those assets. This early breakdown from Staking Rewards names an overreliance on the yield platform Anchor, combined with significant off-chain usage on exchanges, as a driving factor in the collapse. Those externalities, controlled by central entities, effectively subverted the consensus mechanism of the project by operating off-chain where over-leveraged risk could not be observed. Additional issues were caused by a concentration of premined tokens in the hands of Terraform Labs, who essentially controlled protocol voting and overrode the desires of some in the community to reduce risk. A more recent postmortem in June 2022 showed that the liquidity issues and subsequent depegging of the UST stablecoin were caused by Terraform Labs themselves.

The Rise and Fall of CeDeFi – Next to fall, and still unwinding, is the “Centralized Decentralized Finance” (CeDeFi) company Celsius. Companies like Celsius and BlockFi have driven huge growth in Web3 by offering high interest rates on deposited tokens. They act like banks but do not adequately disclose the risks their depositors face, nor do they follow the same regulations as traditional banks. Celsius was exposed to Terra and potentially lost $500M there alone. More recent are revelations that Celsius executives cashed out just prior to the collapse and bankruptcy filing.

Last of the “(first) contagion” was the collapse of Three Arrows Capital. Ongoing investigations are looking at whether 3AC took large margin longs on cryptocurrencies through fraudulent activity and was subsequently liquidated over the past month of pullbacks. Overall, it sounds pretty bad for 3AC management and they might be going to jail.

The unifying thread of these major collapses was the concentration of digital assets and their control into single points of failure. Even worse, the users themselves were in the dark, with little visibility into the behind-the-scenes actions of those companies. In short, the latest round of speculative growth in Web3 was built on unsustainable, over-leveraged, unregulated wildcat banking, totally divorced from the core ideas of a decentralized currency. This mentality has unfortunately not changed since the beginning of the year, and more liquidity crises are not out of the question.

Unfortunately, all of these problems were created intentionally (the fallout, of course, was not): many players in the Web3 ecosystem today are attempting to rebuild traditional SaaS and fee-extraction business models by creating layers of complexity that separate users from the core Web3 value proposition, peer-to-peer transactions.

While the 2022 drawdown in Web3 did a lot to refocus the industry on its core principles, there are still growing centralization and regulatory concerns:

Ethereum Merge – Ethereum 2.0 staking is currently heavily concentrated among major cryptocurrency exchanges and the Lido pool. So far, just two centralized staking providers, Coinbase and Lido, have produced almost 50% of Ethereum blocks post-merge. Control of cryptocurrencies by “banks” (Coinbase, Kraken, BlockFi, FTX, etc.) presents a threat to the uncensorable features of the Ethereum blockchain. With control of the Ethereum blockchain and operating under U.S. regulatory policies, these entities must implement any and all controls required by law. What this means is that cryptocurrencies would effectively become fiat currencies – implemented by decree from the state.

If we are to avoid this scenario, we must help create a truly decentralized ecosystem where a few centralized entities can’t control the consensus mechanism of a Web3 protocol. We need native Web3 solutions – peer-to-peer, decentralized solutions and tools that empower users, not centralized market makers. We’re building Casimir to do just that.

Decentralization – Probably the most overused and watered-down word in the space is “decentralized.” Nearly everything in blockchain/Web3 is called decentralized, whether or not it actually is. The unfortunate reality is that blockchains are often decentralized in name only. A recent study by Trail of Bits for DARPA concludes that blockchains are fairly centralized. They report that pooled mining gives Bitcoin a Nakamoto coefficient of 4, and that Proof of Stake protocols aren’t much better. I won’t get into criticism of the overall piece by Trail of Bits, particularly its misassociation of pools with protocol control for Bitcoin, but the Nakamoto coefficient for Proof of Stake is worth analyzing. Chris Remus of Chainflow has written extensively on staking decentralization and currently maintains a live Nakamoto coefficient tracker that predates the Trail of Bits report. The Nakamoto coefficient is a measure of decentralization: by definition, the number of nodes needed to control the consensus mechanism of the protocol. The lower the number, the less decentralized the network. At the time of this writing, some major protocols have very low Nakamoto coefficients; of note, Polygon is at 3.

The goal of Proof of Stake protocols should be the highest Nakamoto coefficient possible, which would make the protocol very difficult to manipulate since doing so would require simultaneously compromising hundreds of nodes. For example, Cosmos has an active set of 150 validators around the world. Compromising all of them would likely be impossible; however, the Nakamoto coefficient of Cosmos is only 7, meaning that controlling the consensus mechanism of Cosmos would only take a compromise of the top 7 Cosmos validators. A tough job to be sure, but a lot easier than compromising all 150 active validators in the Cosmos ecosystem.

What this means in practice is that staked tokens should be spread across all validators as equally as possible, not continually concentrated in a few of the already heavily staked validators.
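To make the metric concrete, here is a minimal sketch of how a Nakamoto coefficient can be computed from a stake distribution: sort validators by stake and count how many of the largest are needed to cross the threshold that controls (or halts) consensus. The stake numbers below are made up for illustration, not real network data.

```python
# Minimal sketch: computing a Nakamoto coefficient from a stake distribution.
# Stake values are illustrative only, not real network data.

def nakamoto_coefficient(stakes, threshold=1/3):
    """Smallest number of validators whose combined stake exceeds `threshold`
    of total stake (1/3 can halt BFT consensus; use 0.5 or 2/3 for other
    attack models)."""
    total = sum(stakes)
    running = 0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running > threshold * total:
            return count
    return len(stakes)

# Hypothetical distribution: a few heavily staked validators and a long tail.
stakes = [900, 850, 800, 400, 300, 200, 150] + [50] * 143   # 150 validators
print(nakamoto_coefficient(stakes))   # small number => low decentralization
```

Even with 150 validators in the set, a handful of heavily staked operators is enough to cross the threshold, which is exactly the pattern behind the low coefficients discussed above.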

So why are the Nakamoto coefficients so low? Let’s talk about the user experience.

The Crypto Experience

User experience

The Web3 user experience today… sucks. You’re forced either to leave significant returns on the table and surrender control of your assets to a major exchange, or to endure the inconvenience of manually staking across multiple protocols, wallets, platforms, and websites. It’s harder to know what’s going on, and it becomes easier to get scammed through faulty or malicious smart contracts.

What Web3 Looks Like Today

The easiest way to manage multiple digital tokens and assets is through centralized exchanges like Coinbase, which leave a lot to be desired. You give up custody of your tokens, and if you’re staking, you’re missing out on potential rewards that Coinbase scoops up in the form of third-party fees. If you’re more adventurous, you may have multiple wallets and multiple staking websites you use. You get the benefits of self-custody but are forced to manage the wide range of websites and wallets you need to interact with the various protocols. It becomes confusing to manage and monitor all of your assets, and there aren’t any good solutions today that help you compile everything.

What’s more, current Web3 non-custodial products, like MetaMask, fall far short of protecting users from scams or from interacting with bad smart contracts. Because cryptocurrencies are so difficult to interact with and understand, even seasoned pros get manipulated and hacked.

How MetaMask Responds to UI Criticism

Let’s look at how this poor user experience even affects the Consensus mechanisms of PoS protocols. One of the easiest ways to stake in the Cosmos Ecosystem is using Keplr, a mobile/web wallet that allows you to stake to any of the Tendermint based protocols. However, users trying to stake with Keplr aren’t given much to work with. 

The Staking Page for Cosmos

A new staker has no way of deciding whom to stake with. There is no easy way to determine whether a listed validator is reliable or participates in the governance of the protocol. Users have no real reason to choose a validator outside of the top ten because there are no tools to sort and research each individual validator. So, people end up picking validators from the top of the list due to the appearance of quality. We can see this effect in the Nakamoto coefficient of Cosmos today, which is 7. What’s more, two of the top five validators for Cosmos are cryptocurrency exchanges. In Proof of Stake today, cryptocurrency exchanges have an outsized impact on the consensus mechanisms of these protocols.

So, we’re left where we started. Exchanges offer the best user experience and are gaining control over Proof of Stake protocols. Since exchanges are likely to be regulated more like banks going forward, we are looking at a future where Proof of Stake is controlled by banks. What this means is that they control consensus. They can censor accounts, users, or transactions that they don’t like or are told to censor by the government. That’s a fundamental threat to the idea of decentralization and Web3 as a whole – an uncensorable digital currency.

Our conclusion is that a poor user experience is driving centralization and will continue to lead to major single points of failure like Celsius unless we create tools that allow users to take full advantage of the protocols they use.

How we’re building Casimir

First, we reexamined how Web3 is being built today. It’s often been stated that Web3 is “going to be just like the internet”. It’s certainly true that there may be some parallels in growth trajectory and societal impact; however, for many projects in the space today, “just like the internet” means being built on today’s internet: AWS/Google Cloud, numerous HTML websites, and centralized SaaS powerhouses. With Casimir, we want to break the paradigm of today’s Web3 and reexamine how users interact with and use blockchains, digital value, and Web3 overall.

We are getting off the Web 2.0 rails and building something new, a native Web3 experience that prioritizes decentralization, user experience, and user control. We’re building the first true Web3 portal, capable of integrating with any wallet, any blockchain, and any token, allowing users to easily navigate Web3 and interact with the protocols directly, not through a centralized exchange or a variety of unconnected websites.

How We’re Designing Casimir

Improving the User Experience through Decentralization

We’re starting bottom up. Unlike current UIs, designed around traditional Web2 architectures, we’re starting at the consensus and infrastructure layers of Web3. These layers of decentralized node infrastructure providers hold fully indexed blockchain databases, provide APIs for querying, maintain a worldwide network of decentralized nodes for consistent uptime, and build blocks of transactions as they are added to the blockchain. Today, most users are forced through third parties to access blockchains, which introduces extra costs for transactions and token management. By accessing these nodes directly, users are assured of uptime, uncensorable and low-cost transactions, and minimized fees taken by the usual third-party intermediaries. Also, with the right tools, users can access on-chain analytics and other information that these nodes carry. This information can protect users by providing transparency into the entities they’re interacting with as well as information about smart contracts and other on-chain data. Today there simply aren’t good enough tools to make on-chain information available and usable to the everyday user.

There are 3 key areas we’re focusing on as we design Casimir: Usability, Security, and Transparency.

Usability: Similar to Mint or Personal Capital, Casimir will be a place where users can aggregate their digital currencies and assets and easily manage what they have across the various protocols they use. Many Web3 users have multiple wallets and assets from a variety of protocols, so a single location to better manage and view their assets is much needed, without it becoming a single point of failure for any stakeholder. With our multi-chain approach and Multiwallet Connect, we can effectively be an interoperability solution without the bridge.

Casimir will do more than Mint does, however: it will allow users to interact with their chosen protocols, access mints and airdrops, stake and manage their digital currencies across protocols beyond Ethereum, and use specialized tooling that helps protect them. We’ll build and continue to add features like these that help users make the most of Web3.

Our business model isn’t built around trading or exchange fees. Unlike an exchange, we’re not front-running trades or building in hidden custodial fees. Our base product will always be free to use, and we’ll make money by offering a premium subscriber product as well as through our infrastructure services. We believe you’ll not only have a better user experience but actually save money as well.

Security: Unlike most centralized exchanges and custodians, we will never take custody of users’ wallets or tokens. This means we are able to leverage existing on-chain security to protect users at a much higher level. It also means we will never have to worry about liquidity, nor will we be trading a user’s tokens on the backend. Although Casimir will be a single site, it won’t be a single point of failure. The code is open source and we will never take custody of users’ digital tokens or NFTs. If Casimir goes away tomorrow, no funds will disappear and users will still have access to all of their tokens.

Unlike traditional Web2 companies, we’re not building around user account management, user analytics, and productizing the user. We’ll never ask for a user’s email address and build an internal profile, because not only does this create a security vulnerability for our users, it’s also unnecessary. Our users will always be able to log in through their wallet, which means they will always control their login credentials.

As part of our usability effort, we’re building a smart contract analyzer that lets users know what their interaction with a smart contract will *actually* do, monitor the smart contracts they’ve given permissions to, and revoke permissions on old contracts. Because we are working at the protocol level, we are able to provide real-time information and on-chain analytics to help users make the best decisions with their digital assets.

Transparency: As the name indicates, every on-chain action on a public blockchain is publicly accessible. Every wallet, every transaction. This transparency is unique among financial systems, where the books of banks or governments are not available to everyday users. Today, many Web3 financial providers continue to hide behind their proprietary systems, and their financial solvency is visible only to a select few. What’s worse is that these companies (in the US at least) often exploit regulatory loopholes to avoid the audit requirements banks face.

In a world where Bitcoin was launched in the face of a banking crisis, with a desire to bring about a new and transparent financial system to the world, the actions of many major players in the space today are in direct opposition to the values of Web3.

Casimir will help change this. We leverage our fully indexed chains to provide transparency analytics to our users. While block explorers and address balances are always available to those who know how and where to look, we’re making it easier for users to interact with and use indexed Web3 information. We’ll allow users to easily sort data, see large and identified wallets, track large transactions, and match wallets with organizations so that proof of reserves can be verified.

We’re here to create a better Web3 user experience. For us, that means enabling users to better use the decentralized capabilities of the space, not to trade better in the crypto casino. We’ve got a long way ahead of us, both to build something better and to help users learn the importance of self-custody and what decentralization truly means.

Over the next few months we’ll present some of the specific technology developments we’re working on to help achieve our goals, including non-custodial Ethereum staking, cross-chain wallet integrations, and cross-chain single sign-on. You can follow our progress on GitHub and join us on Discord. We’re striving to create an open ecosystem that empowers the user, and we hope you’ll join us.


Our Response to NIST

The National Institute of Standards and Technology requested information for a congressional study regarding emerging marketplace trends in a variety of areas, including blockchain and IoT. Read our response below as we discuss applications of Decentralized Computing, how consensus works, and what’s next.

Abstract

This paper contends the following: blockchain as a ledger or immutable database alone is a limited technology with few real use cases. The true value of the technology becomes apparent through the consensus mechanisms, which enable the real value proposition: digital value and distributed peer-to-peer trust. This paper outlines why consensus is the true innovation in the blockchain space, some methods of consensus, and applied use cases. Applied use cases discussed: digital tokens; defense applications and distributed trust; IoT and distributed data exchange in industry, including energy. It is important that this reviewing body consider the implications of such a technology in order to create smart policy. Lastly, we encourage regulatory frameworks in the space to focus on enabling technology growth on public, decentralized networks where entrepreneurs are free to build and innovate.

Introduction

We believe the significance of “blockchain technology” lies in the creation of new value models, consensus mechanisms, and tokenization itself, not in the “blockchain” data structure. Blockchain technology is a broad, oft-misused term that people use to describe many things, frequently just an immutable database. An immutable database, while having limited use cases in regulatory and legal contexts, is not a revolutionary technology in and of itself. While it’s important to look past much of the current sensational, speculative hype surrounding cryptocurrencies and their derivatives like NFTs, we must also look past enterprise efforts focused on blockchain as a limited database technology. Cutting through the confusion, we will look toward the core capabilities of the technology and where they will lead: a future of digital value exchange.

The true innovation and revolutionary technology are the consensus mechanisms of these Decentralized Computing platforms powering cryptocurrencies. Similar to the computer enabling machines to create and store data or the internet allowing machines to exchange data, Decentralized Computing enables machines to create, transact, and store value. This mechanism enables trust between independent users around the globe, who do not need to know each other or even utilize a third party to moderate their interaction. Value exchange has been democratized for the first time in human history, enabling all users to control their own credit, without reliance on a third party like a bank, government, or temple to set exchange rates and mediate value exchange.

This is a fundamental change in the way we will interact with currency and value in our lives. In much the same way past revolutions displaced the main players in the industries they disrupted (the Industrial and the Digital/Internet revolutions), so too will Decentralized Computing continue to disrupt established banking and financial institutions and even beyond, to the centralized enterprises who currently control information flow on the internet.

We believe very clearly that the future of “blockchain,” or Decentralized Computing, will be built on open, decentralized public protocols like Bitcoin and Ethereum for the reasons outlined in this paper. It is vitally important that innovation, use cases, and regulation in this space are designed with this in mind. In short, we believe public, open, decentralized computing networks utilizing Proof of Work, Proof of Stake, or another yet-to-be-designed mechanism will win out due to their unique incentive structures that draw in a wide range of users. Combined with an open architecture that allows anyone to build, public networks will create a myriad of use cases, even beyond those discussed below. It is important that builders on these free and open systems be enabled to innovate and create in order to maximize the potential of decentralized computing technology.

Overview of Consensus

Briefly, a consensus mechanism is designed to enable networked computers to agree on (1) a set of data, (2) modifications to or computations with that data, and (3) the rules that govern that data’s storage and computation. It’s consensus that allows a cryptocurrency like Bitcoin to exist, not “blockchain technology.” For example, in the Bitcoin network, there are a set of rules that govern the inflation and maximum amount of Bitcoin that can ever exist (21 million). These rules are agreed to by network participants, and over time, trust builds that the Bitcoin tokens will continue to conform to the rules of the network. It is this part of the distributed consensus mechanism that allows Bitcoin to have and retain value. Consider the US dollar: there is no limit on the number of dollars that can exist; however, the strength of the US economy, military, and government creates a trusted environment where the dollar can have value, even though it is a ‘fiat’ currency. Because Bitcoin doesn’t have a central government or physical assets, trust and value must be built and maintained through the consensus mechanism. To this end, Bitcoin created a unique incentive structure as part of its Proof of Work consensus (explained later) that makes participants in the network want to conform to and maintain the rules so that value can grow.
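To make the 21 million rule concrete, here is a short back-of-the-envelope sketch of how the halving schedule every participant enforces converges on that cap. It ignores the satoshi-level rounding the real protocol performs at each halving, so the total is approximate.

```python
# Rough sketch of Bitcoin's issuance schedule: a 50 BTC block reward that
# halves every 210,000 blocks. Summing the series approaches (but never
# exceeds) 21 million BTC. Per-block rounding to whole satoshis is ignored.

reward = 50.0
blocks_per_halving = 210_000
total_supply = 0.0

while reward >= 1e-8:            # below one satoshi the reward is effectively zero
    total_supply += reward * blocks_per_halving
    reward /= 2

print(f"{total_supply:,.2f} BTC")   # approximately 21,000,000 BTC
```

No single party stores or enforces this cap; every node independently applies the same rule, which is what gives the limit its credibility.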

This network security is also what allows an open and permissionless network. Anyone can build on these networks without fear that any single actor could destroy them, and this is partially what makes the technology so powerful. When that openness is removed and the network becomes a consortium or private model, the ability to create is limited and becomes mediated through the controlling enterprises on the network.

There are several Consensus Mechanisms in use today: Proof of Work, Proof of Stake, Consortium, and Proof of Authority. These mechanisms provide the backbone for Decentralized Computing networks and we’ll see that the open mechanisms, Proof of Work and Stake, do far more than simply act as a “blockchain ledger.”

Proof of Work

Proof of Work is the original consensus mechanism, designed by the pseudonymous Satoshi Nakamoto. It is quite simple: network participants use their computers to solve a math problem (or algorithm); the first computer to solve the problem is rewarded in Bitcoin and earns the ability to process a number of transactions, collecting those fees as well. This has been occurring essentially non-stop since the launch of Bitcoin in 2009. The mechanism is designed so that the difficulty of the algorithm adjusts as more computers are added to the network, keeping a Bitcoin block (and reward) mined roughly every 10 minutes. The reward from Proof of Work makes for a powerful incentive. Participating computers, especially today as the amount of computing power on the network has grown exponentially, spend quite a bit of money to purchase, maintain, and run their equipment. They must ensure the rules of the network are maintained so that they can continue to earn money from transaction fees and block rewards. Someone looking to disrupt the network would need to deploy an incredibly expensive amount of computing power to manipulate blocks or transactions, which is not worth it for any individual or company. This methodology has proven very successful, though there are two major concerns. The first is the amount of electricity required to maintain the network. A full treatment is outside the scope of this paper; however, thus far, predictions of future energy usage have been quite incorrect and there is no indication that Bitcoin is generating demand for new energy sources, only utilizing excess energy in specific areas. The second concern is the ability of any new protocol to launch its own Proof of Work network. Since there is so much computing power involved in the cryptocurrency space today, it is easy for an attacker to disrupt and destroy smaller Proof of Work networks.
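For readers unfamiliar with the mechanics, the “math problem” is a hash puzzle: hash a candidate block with a changing nonce until the output falls below a network-set target. The toy sketch below illustrates the idea only; real Bitcoin mining uses double SHA-256 against a 256-bit target, at vastly higher difficulty, and the header string here is a placeholder.

```python
# Toy proof-of-work sketch: find a nonce such that SHA-256(header + nonce)
# starts with a required number of zero hex digits. Real mining compares a
# double SHA-256 hash against a 256-bit target and retunes that target every
# 2,016 blocks to keep block times near 10 minutes.
import hashlib

def mine(header: str, difficulty: int = 4) -> tuple[int, str]:
    """Return the first nonce (and its hash) that meets the difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):   # puzzle "solved"
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block header | prev hash | tx root")
print(nonce, digest)   # each extra zero digit multiplies the expected work by 16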

Proof of Stake

There are two other key limitations of Bitcoin that drove innovation in new methodologies of consensus and resulted in Proof of Stake. The first is that Bitcoin is stateless, or non-Turing complete. This means it can’t do much else other than manage the transfer and storage of Bitcoin. There isn’t a problem with this, per se, but innovators were interested in ways to build an “internet computer”: a decentralized computer with built-in payment rails that would enable users to create a new generation of computer programs and new types of digital assets, all natively decentralized and uncensorable (streaming platforms, social media, music, art, etc.). This, in part, required a new methodology to overcome the second limitation of Bitcoin, the speed of the network (Bitcoin blocks only come once every 10 minutes), while still maintaining the overall security of the public network. Proof of Stake emerged as the most likely solution and works as follows. Instead of rewarding computers based on the computing power they are utilizing, Proof of Stake rewards participants based on the number of network tokens they have “staked,” or deposited, on the network. Participants can choose from a variety of network computers, or Validators, to stake to, and want to choose a Validator who will properly secure and maintain the network; otherwise their tokens lose value. This incentive structure has so far proven to be a worthy competitor to Bitcoin, and most new network launches (Cosmos, Solana, Polkadot, and others) have utilized a form of Proof of Stake. The Ethereum network, the original Turing-complete decentralized network, is planning to shift from Proof of Work to Proof of Stake later this year (2022).
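The common thread across Proof of Stake designs is that a validator’s influence is proportional to the tokens staked or delegated to it. A minimal sketch of stake-weighted proposer selection is below; the numbers are made up, and details such as delegation, slashing, and epoch rules differ widely between Cosmos, Solana, Polkadot, and Ethereum.

```python
# Minimal sketch of stake-weighted proposer selection in proof of stake:
# each validator's chance of proposing the next block is proportional to
# the tokens staked (or delegated) to it. Numbers are illustrative only.
import random

validators = {
    "validator-a": 1_000_000,   # hypothetical staked/delegated tokens
    "validator-b":   400_000,
    "validator-c":   100_000,
}

def pick_proposer(stake_by_validator: dict[str, int]) -> str:
    names = list(stake_by_validator)
    weights = list(stake_by_validator.values())
    return random.choices(names, weights=weights, k=1)[0]

# validator-a should win roughly two-thirds of the time with these weights.
picks = [pick_proposer(validators) for _ in range(10_000)]
print({name: picks.count(name) for name in validators})
```

This proportionality is also the root of the equitability concern discussed next: whoever accumulates the most stake accumulates the most influence.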

There are two major concerns with Proof of Stake. The first is that, compared to the Bitcoin network, Proof of Stake is a newcomer and not as established or tested. Second, there are questions as to the equitability of Proof of Stake. Since it is purely token-based, most of the token supply tends to consolidate with a few wealthy token holders on a few wealthy Validators. This potentially increases the likelihood of collusion and network manipulation. Proof of Work is seen as more equitable since any network computer can mine Bitcoin without holding any Bitcoin to begin with; however, the network has become so large that a single computer, or even a small data center of computers, is unlikely to mine Bitcoin consistently, so it too has tended toward centralization among the large “miners” who are able to afford the expensive equipment needed.

Consortium

Consortium consensus has been favored among enterprises, like IBM and Walmart, attempting to experiment with Blockchains. Consortium consensus grants privileges to known members to participate in consensus but is a closed network to the public. Similar to our Walmart example later (see Trade section below), who determines the identity of the participants? Who decides exchange rates or data control? What happens if one of the participants is compromised? Participants in these networks must have a level of trust with each other outside of the network itself, severely limiting their use cases outside of very specific examples.

Proof of Authority

Proof of Authority, or social consensus, is an alternative consensus mechanism that doesn’t require the computing power of Proof of Work or the expense of holding tokens for Proof of Stake. Proof of Authority grants members the ability to elect (or demote) other users on the network through voting. It is similar to Consortium consensus in the sense that participants must build some sort of trust with other participants to gain access, but there are controls in place to remove underperforming or malicious participants. The trust is distributed across the participants, enabling this network to be public and open. The major flaw in Proof of Authority is the general politicking associated with a decentralized group of participants attempting to make decisions regarding changes to the network or admitting new members. The network Factom, one of the earliest decentralized networks to launch after Bitcoin, has faced these challenges, crippling the network to the extent that it is shifting to a Proof of Stake network later this year (2022).

Recommendations

1. A peer-reviewed study of the energy usage and growth of the Bitcoin network should be commissioned and evaluated; calculations and predictions thus far have been incorrect.

2. Policies should be evaluated and enacted to promote innovation and development utilizing open and decentralized network protocols.

Use Cases

Finance

Decentralized finance, or DeFi, is the most clear-cut of the use cases for Decentralized Computing today. We won’t focus on this too much as it is written about elsewhere by more qualified individuals but suffice to say, as noted above, DeFi enables a democratization of finance, enabling users to engage in a new world of value creation and exchange. Needless to say, this area of disruption is ripe with optimism as well as fraud and until more specific regulation is applied in the United States, this “wild-west of digital gold” will continue.

Trade

Supply chain is often the immediate answer when someone asks, “What else can blockchain do other than cryptocurrencies?” This is due to the above-noted misunderstanding that “blockchain” is just an immutable database. Often cited is Walmart utilizing blockchain to track its mango and pork supply chains (in a few specific geographic areas). While better traceability in a supply chain, to help prevent disease outbreaks and sustainably source goods, is indeed a noble cause, all of this could have been done (and probably should have already been done for most of our food supply chain) with traditional web and digital technologies. The speed at which Walmart was able to audit its supply chain also has little to do with blockchain and everything to do with the digitization of that supply chain. Walmart’s “blockchain” is just supply chain software that Walmart controls and forces its suppliers to record data to. There is no real consensus mechanism, there is no real value creation or exchange on the platform, and it is a closed architecture, meaning only federated users can participate. If Walmart changed data on the platform, who would know?

We actually do believe that trade and supply chain is a powerful use case for Decentralized Computing, but not for the reasons Walmart is using it. Trade is often a complex operation, requiring stakeholders across the spectrum from governments to suppliers. Significant value is generated as goods move through a supply chain, and this value is transacted across many distributed parties and stakeholders. As supply chains become more digital, a significant need arises for data standardization across the supply chain. This is easy to do in a simple supply chain within the borders of a country. However, in a globalized world, goods transit through many different countries, each with its own regulations and varying levels of trust in the others. Decentralized Computing offers a unique opportunity for countries to agree on a common data standard, enable transparency across supply chains, and facilitate a more efficient flow of goods across borders without requiring the members to trust each other outside the network.

Goods flowing across borders encounter a wide range of regulations, paperwork, taxes, and inspections. An open protocol would enable the standardization of data across borders while enabling participating countries to retain their own individual regulations, digital paperwork, and tax revenue. This technology could be combined into a single system where data, goods, and money flow simultaneously as the product traverses its supply chain. This would greatly reduce trade barriers and overhead while providing increased transparency of the supply chain and ensuring compliance.

Internal trade is also a potential use case. In many industries, particularly healthcare, supply chains are managed by third parties who introduce unnecessary overhead and complex contract agreements between manufacturers and end-users, like hospitals. This also creates a large information barrier where manufacturers are unaware of the long-term quality of their product, how often it is used, and have only delayed demand information, making it difficult to build a dynamic supply chain. In many cases, manufacturers are forced to purchase old data from purchasing agencies like GPOs just to better understand their own equipment. The reverse is true as well. Hospitals are locked in complex contract agreements with GPOs, providing very little price transparency or resiliency. A trusted, decentralized network of manufacturers and end-users interacting directly with each other would introduce additional transparency and information flow between stakeholders, creating a more resilient and cost-effective supply chain. Recent regulatory changes, including the 21st Century Cures Act Final Ruling, created a data standard for health information that could be leveraged to provide even better quality assessment and demand data to manufacturers.

Defense

The US Department of Defense (DoD) is an early adopter of “blockchain” for supply chain, and unlike Walmart, there are some good reasons for this. First, traceability is essential for many parts and products in the DoD supply chain. Certain parts are incredibly valuable and could mean life or death, or even impact national security; as such, a heavy-handed immutable database tracking these parts can find use in such an environment. For example, the highly successful SUBSAFE program already uses an amended-only paper trail for ship-safety repairs on submarines. The use of an immutable database in this instance could dramatically improve a paper-heavy and cumbersome process while still preserving ship safety. Again, these use cases are limited in nature and don’t really address a key problem in vital supply chains, namely data entry (even in our SUBSAFE example). Non-natively digital information or value, whether a collectible, a Rolex, or a seawater valve, will always depend to an extent on the person entering the information onto the blockchain. Blockchain, although immutable, can still be error-prone (and then preserve that error for eternity). This again highlights the fact that many of our supply chain issues are digitization issues, not audit trail issues.

However, there are ways we can work to reduce the chance of error and leverage emerging digitization technology to better ensure proper data entry. In a current project with the United States Navy, we’re building a blood product tracking tool for sharing blood product information across a variety of parties such as a blood bank, hospital, and ship. We’ve utilized barcodes and RFID to automate data entry and partially solve this problem, while also integrating another key use case, IoT. As the DoD continues to experiment with and test Decentralized Computing, we believe two more key use cases will emerge:

1. Digital Chain of Custody: As DoD continues its digitization efforts, much of the data created will be (and already is) of the highest importance to national security, from directives, to planning, to weapons system software. Securing this data in a way to ensure it has not been tampered with is a key area of national security importance. Especially in the case of mission-critical software, which is already natively digital, Decentralized Computing can be a powerful tool to prevent malicious or even unintentional errors.

2. Decentralized and Secure Communications: Warfare is becoming increasingly informationized and cyber-based. Conflicts arising in the coming years will be a battle of systems and systems confrontation. Digital information flowing from soldiers on the ground, drones in the air, weather, intelligence, and more will be fed into a fully integrated combat, comms, and sensor network from which topline decision-makers will be able to view the entire battlefield and make key decisions. Disrupting these networks will require coordinated attacks on key nodes to disrupt or destroy the system. Traditional digital and web architecture creates single points of failure that, if exploited, would destroy military systems and impact national security. Decentralized Computing could dramatically improve the robustness of these systems. By creating a decentralized network of nodes, information transfer can be increased in volume; by utilizing technologies like BitTorrent, security can be increased by breaking information into smaller chunks; and the result is a robust data transfer network where the destruction of one node has little impact on the other nodes or the operation of the system. A sophisticated consensus mechanism would also be able to spot potential bad actors attempting to participate in consensus, mitigating man-in-the-middle or Sybil attacks, something much harder to do in traditional cybersecurity.

Internet of Things (IoT)

Every day a new device becomes IoT enabled, from your microwave to your grill. This will certainly continue to grow, and these devices will gather more and more information from our everyday lives and beyond into industry. As IoT becomes more ubiquitous, the control the IoT devices have over our lives and the data they generate will be very important, valuable, and vulnerable. Significant debate will continue to occur around who owns IoT data and what it can be used for. Possibly even more important is what the device can do. If it’s just monitoring your grill temperature, you may be less concerned about privacy or data ownership. If it’s a heart monitor, baby monitor, or a control actuator on an oil pipeline, that device’s connectivity presents a very clear vulnerability that, if exploited (or if the device itself fails), could result in serious consequences.

Consider a data privacy use case: with a decentralized IoT device that does not depend on a third party’s centralized server, the user can be assured that the information generated by the device is owned by that person alone. Additionally, compromising a large number of decentralized IoT devices would require exploiting each device individually, something extremely difficult, if not impossible, to do, whereas in a centralized architecture it would only require exploiting a single server.

Centralized IoT devices are also limited by the companies that produce them, interoperable with only others in their product line, and dependent on the longevity of their maker to maintain functionality. What happens to a valve actuator on a pipeline when the maker of that connected device goes bankrupt, is acquired, or the cloud server supporting that device goes down? The IoT centralized ecosystem as it exists today is highly vulnerable to the viability of the companies making the device. Potentially fine for a light bulb or grill, but disastrous if a pipeline goes down or your pacemaker needs an update.

IoT devices being served by a decentralized network can avoid the issues that central servers create. Even in the case that a company ceases operation, the network itself will remain functional, allowing the device to continue operations. The data standard of the decentralized network, combined with current efforts today to link data across different networks also presents an opportunity for increased IoT device interoperability. Devices will be able to speak to each other across product lines, creating new use cases. Your grill could talk to your doorbell, letting you know friends have arrived for the barbecue. In a more serious use case, weather monitoring could be connected to electrical grids and oil pipelines, providing information directly to control systems to enable proactive protection of equipment.

MachineFi is the terminology being introduced to describe the intersection of Industrial IoT and finance. The open protocol IoTeX is a leader in the space and describes MachineFi as a new paradigm fueled by Web3 that underpins the new machine economy, whereby machine resources and intelligence can be financialized to deliver value and ownership to the people, not centralized corporations. Though somewhat hyperbolic, the message is clear: IoT devices powered by Decentralized Computing have the potential to impact not only the security and longevity of the device but also the empowerment of the users themselves.

Energy

Energy markets are increasingly distributed as ‘micropower’ sources such as wind turbines and solar panels proliferate. Battery backups to these renewable energy sources are also in high demand and are often traded not as storage but as ancillary power. The variability of renewable sources is difficult to predict, so much so that it often trades at a negative price. Current energy trading was designed around the large power sources of the past and is not dynamic enough to predict or react to changes in weather or energy production. Smaller power operators have an opportunity to more directly sell their power to the grid as digital technologies now manage the flow of energy. Additionally, grids could also have more flexibility to directly compensate for excess energy storage and backups, a must-have for extreme weather events and surge times, ensuring power is available when needed. As a whole, energy markets could be another prime area where Decentralized Computing could have a large impact utilizing a combination of the use cases discussed above. Cross border energy trade, enabled by a common data standard on an open network, robust grid and weather monitoring through widespread IoT devices, and MachineFi tools to power dynamic energy trading and real-time settlement could usher in the next generation of reliable and secure clean energy production.

A thought on Uncensorability

A clear but often overlooked value proposition of Decentralized Computing is uncensorability. This means that transactions on a public network, like Bitcoin, cannot be stopped by any third party. This is a particularly powerful empowerment technology that could enable victims of oppressive governments, or those unable to access banking, to participate in an economy otherwise closed to them. But uncensorability has its dark side – horrible things could be empowered by this technology: child pornography or payments to terrorists could be funded and transmitted through these networks. As public, open, decentralized, and uncensorable networks grow, special attention and regulation are needed to ensure that the worst of humanity is not enabled along with the best.

Conclusion

Let’s cut through the hype – if tokens like Bitcoin, and even Dogecoin and Bored Ape NFTs, have some value, what exactly is that value? As our lives have become more digital, as our machines and data have become more digital, as software has eaten the world, we must understand that the value associated with our lives, our machines, and our data is also becoming digital. There is something here beyond mere speculation. This should be seen as a natural progression: money flowing into the digital value ecosystem in search of what’s next. The question before us is whether to allow the natural progression of value digitization by enabling this new economy to be built on free and open standards, or to attempt to heavily regulate it to prevent what is seen as excess in the world of digital tokens (like Dogecoin). A few things are clear, though: the robustness and uncensorability of the open protocols make them very difficult to shut down, and the alternatives, private and consortium networks, have been unable to produce a product that can match the potential of the free and open digital value economy.


Celo’s Approach to Decentralization

By Connor Smith

Note: This is the final installment of a series exploring different approaches that blockchain networks have taken to achieve decentralization. Part 1 introduced decentralization and the interplay it has with certain aspects of crypto networks like governance, incentives, and network architecture. If you missed that article, I highly recommend going back and reading it here. The remaining articles have been examinations of decentralization on Bitcoin, Factom, Cosmos, Terra, and Polkadot/Kusama. If you missed those and would like to go back and read them before we dive into Celo, you may do so here, here, here, here, & here, respectively.

Hey everyone, thank you for joining me again as our tour through varying approaches to decentralization comes to a conclusion. From the first crypto network, Bitcoin, to the new generation of ‘meta-networks’ like Polkadot and Cosmos, we have seen quite a few different ways networks have attempted to decentralize and how that has influenced their design. We have seen how factors like application design criteria and consensus architecture (i.e., proof-of-work vs. proof-of-stake) influence the decentralization of a network’s resources and participants. Moreover, by taking a chronological approach to the networks examined throughout this series, we have seen the evolution of the crypto industry over the better part of a decade, and we will end with the youngest protocol covered thus far, Celo. Aiming to overcome the price volatility and ease-of-use problems associated with many cryptocurrencies, Celo seeks to bring financial infrastructure and tools to anyone with a smartphone.

To reach this goal, Celo is taking a full-stack approach and introducing innovations at the networking and application layers with technologies like a lightweight address-based encryption scheme, an ecosystem of stable-value tokens, and novel system architectures. The project is backed by some of the most prolific crypto investment partners and Silicon Valley tech titans, including a16z, Polychain Capital, Coinbase, Reid Hoffman, and Jack Dorsey, to name a few. The protocol has been making waves over the last few months due to its rigorous incentivized testnet competition, known as the Great Celo Stake Off, which was recounted two weeks ago by our CEO Nate Miller. The Stake Off is entering its third and final phase and the mainnet is slated to launch later this year, making 2020 a big year for the protocol. So without further ado, let’s dive into Celo!

So What is Celo and How Does it Work?

Celo was launched with the mission of building a financial system capable of bringing conditions of prosperity to everyone with a smartphone. To create such an environment, Celo has proposed a theory of change built around the following three verticals: satisfying people’s basic needs like access to food and healthcare, enabling an individual’s growth potential, and increasing people’s social support for one another. All aspects of the protocol, from its architectural decisions to development efforts and even technical and community projects, support activities tied to enabling these three conditions and ensuring such a system is created. Work on the project began in the summer of 2018 when entrepreneurs turned GoDaddy executives Rene Reinsberg and Marek Olszewski raised their initial seed round of $6.5 MM from some of the Valley’s top venture capitalists. The pair had exited their prior company, Locu, to GoDaddy in 2014 for $70 MM, and had since been serving as vice presidents in the restaurant and small business division of the firm. Armed with little more than a white paper at the time, the team got to work, and in less than a year the first public testnet was released. Celo aims to achieve its mission of bringing conditions of prosperity to everyone and being a mobile-only payments platform through the following two features: mapping users’ public phone numbers to the alphanumeric string (public key) needed to transact on the network, and using a network of stablecoins pegged to a basket of different cryptocurrencies to minimize price volatility.

The team believes that creating a price-stable, more user-friendly transaction experience is the only way a cryptocurrency payments solution will see success, and thus has sought to redefine the entire blockchain networking stack to optimize for these characteristics. Hence, Celo is a mobile-first solution operating as a proof-of-stake smart contract platform based on Ethereum and composed of the following three components: lightweight identity for a better user experience, a stability mechanism for stable-value currencies, and systems for incentives and governance to ensure platform stability. An image of the Celo stack may be viewed below.

Celo’s lightweight identity system utilizes a variant of identity-based encryption known as address-based encryption to overcome traditional user experience issues associated with transacting cryptocurrencies. Instead of the canonical flow of downloading a wallet, generating a public/private key pair, and providing whoever is sending you crypto with a long hexadecimal address, Celo’s address-based encryption ties a user’s phone number directly to a Celo wallet. This allows the phone number, rather than the actual Celo address, to be used when a payment is initiated, simplifying the payment process. Additionally, only a cryptographic hash of the phone number is stored on the blockchain to preserve privacy. Celo also allows a user to link multiple phone numbers to his or her wallet address to protect against losing a phone or changing numbers. Celo also utilizes a social reputation mapping algorithm on top of this infrastructure known as EigenTrust. While the technical underpinnings of the algorithm are fairly complex, it functions similarly to Google’s PageRank algorithm but is designed for decentralized systems. In short, the algorithm defines a given phone number’s relative reputation score based on the number of phone numbers connected with and trusting that phone number, coupled with the weighted reputation of those connections.
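A rough sketch of the phone-number mapping idea follows. This is a deliberate simplification: Celo’s actual scheme salts and blinds the hash with the help of a separate privacy service so that numbers can’t be recovered by brute force, and the registry, address, and phone number below are placeholders invented for illustration.

```python
# Rough sketch of phone-number-to-address mapping: the chain stores only a
# one-way hash of the phone number, and a payment to a number resolves to the
# wallet address registered under that hash. Celo's real scheme additionally
# salts/blinds the hash via a privacy service; this sketch omits that step.
import hashlib

registry = {}   # hash(phone) -> wallet address (hypothetical on-chain mapping)

def phone_hash(phone_number: str) -> str:
    return hashlib.sha256(phone_number.encode()).hexdigest()

def register(phone_number: str, address: str) -> None:
    registry[phone_hash(phone_number)] = address

def resolve(phone_number: str):
    return registry.get(phone_hash(phone_number))

register("+1 555 0100", "0xPlaceholderAddress")   # placeholder values
print(resolve("+1 555 0100"))                     # -> 0xPlaceholderAddress
```

The payment flow only ever sees the hash, which is why the sender can type a phone number while the chain itself never stores the number in the clear.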

Similar to Terra’s approach to creating a stablecoin network, Celo is also a network of stablecoins pegged to real-world fiat currencies and uses a seigniorage-based approach to maintain stability. For the sake of brevity, I am going to gloss over what stablecoins and seigniorage are, as I discussed them at length in my post on Terra, and instead dive into how they work in the context of Celo. Celo is designed to support an ecosystem of pegged stable currencies alongside the native token of the protocol, Celo Gold (cGold). The first, and currently only, stablecoin on the network is the Celo Dollar (cUSD), which is pegged to the price of the U.S. dollar. cGold has a fixed supply and is held in a reserve contract where it is used to expand or contract the supply of cUSD to maintain a stable price through seigniorage. The network relies on a number of external oracles to provide feeds of the cGold price in USD and then allows users to exchange a dollar’s worth of cGold for one cUSD and vice versa. When the market price of cUSD rises above the $1 peg, arbitrageurs may profit by purchasing a dollar’s worth of cGold, exchanging it with the protocol for a cUSD, and selling that cUSD at the market price. Conversely, if the price of cUSD falls under the peg, arbitrageurs can profit by purchasing cUSD at the market price, exchanging it with the protocol for a dollar’s worth of cGold, and then selling the cGold on the market. Thus, network participants are able to profit in nearly any market condition. A more thorough examination of how Celo’s stability mechanism works may be found here.
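A small worked example of that arbitrage loop, under the simplifying assumptions that the protocol always honors the one-dollar exchange and that fees, slippage, and oracle lag are ignored (all prices below are hypothetical):

```python
# Worked example of the cUSD/cGold arbitrage described above, ignoring fees,
# slippage, and oracle update lag. The protocol always treats 1 cUSD as
# exchangeable for exactly one dollar's worth of cGold. Prices are made up.

cgold_price_usd = 4.00        # hypothetical oracle price of cGold

# cUSD above the peg: buy $1.00 of cGold, mint 1 cUSD, sell it at market.
cusd_market_price = 1.02
cgold_spent = 1.00 / cgold_price_usd                      # 0.25 cGold costs $1.00
profit_above = cusd_market_price - cgold_spent * cgold_price_usd
print(f"profit above peg: ${profit_above:.2f} per cUSD minted")     # $0.02

# cUSD below the peg: buy 1 cUSD at market, redeem it for $1.00 of cGold, sell it.
cusd_market_price = 0.97
cgold_received = 1.00 / cgold_price_usd                   # $1.00 worth of cGold
profit_below = cgold_received * cgold_price_usd - cusd_market_price
print(f"profit below peg: ${profit_below:.2f} per cUSD redeemed")   # $0.03
```

In both directions the arbitrage pushes the market price of cUSD back toward $1, which is the whole point of the mechanism.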

Celo also goes a step further to ensure stability and has implemented a constant-product market-maker model to prevent the cGold reserve from becoming overly depleted when the price of cGold supplied by the oracles does not match the market price. The mechanism dynamically adjusts the offered exchange rate in response to exchange activity and resets a new constant-product market maker to trade cGold and cUSD whenever the oracle price of cGold is updated. Hence, if the oracle price is correct, the exchange rate determined by the constant-product market maker will be equivalent to that of the market and no arbitrage opportunity will exist. However, if the oracle price data is incorrect, the rates will differ and an arbitrage opportunity will exist until it has been exploited enough to adjust the quoted exchange rate and erase the opportunity.
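For readers unfamiliar with the model, a minimal sketch of the constant-product rule itself (the x * y = k invariant) shows why the quoted rate worsens as one side of the pool is drained, which is what bounds how much of the reserve a mispriced oracle can leak. The reserve sizes below are illustrative, and this sketch leaves out Celo’s periodic reset of the pool on each oracle update.

```python
# Minimal constant-product market maker sketch: the pool holds two assets and
# keeps reserve_cgold * reserve_cusd constant across trades, so the quoted
# rate moves against a trader as one side is drained. Reserve sizes are
# illustrative only; Celo resets the pool whenever the cGold oracle updates.

reserve_cgold = 10_000.0
reserve_cusd = 40_000.0                      # implies ~4 cUSD per cGold initially
k = reserve_cgold * reserve_cusd             # the invariant

def sell_cgold(amount_in: float) -> float:
    """Sell cGold into the pool and receive cUSD, preserving x * y = k."""
    global reserve_cgold, reserve_cusd
    new_cgold = reserve_cgold + amount_in
    new_cusd = k / new_cgold
    amount_out = reserve_cusd - new_cusd
    reserve_cgold, reserve_cusd = new_cgold, new_cusd
    return amount_out

print(sell_cgold(100))   # ~396 cUSD for the first 100 cGold sold
print(sell_cgold(100))   # slightly less for the next 100 as the rate adjusts
```

Because each successive trade gets a worse rate, exploiting a stale oracle price becomes progressively less profitable, limiting reserve depletion until the oracle catches up.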

Celo’s Network Architecture

Image adapted from celo-org medium here

Consistent with their full-stack approach to creating a mobile-first financial system, Celo implements a novel tiered network architecture to optimize the end-user experience and maximize physical decentralization. Similar to other Byzantine Fault Tolerant (BFT) proof-of-stake networks we have seen so far, like Cosmos, the core network participant responsible for producing blocks and verifying transactions is the validator. Unlike other proof-of-stake networks that encourage anyone who is willing to run a validator, Celo encourages only professional node operators to run validators on the network. For example, Celo strongly encourages running validators in a secure data center and has been auditing validators participating in the Stake Off to see if this is the case. By maintaining a secure set of globally distributed validators, the network hopes to maximize security, performance, and stability. Celo has also attempted to implement safeguards against any single validator or organization gathering a disproportionate amount of the network’s resources by introducing validator groups. Instead of electing individual validators to participate in consensus, validator groups composed of collections of individual validators are elected, and their members then internally compete to produce blocks. The actual election process and underlying mechanisms are far more involved and complex, so if you are interested in learning more, as I said earlier, check out this blog post from our CEO, Nate Miller, which explains the process in more detail. Validator groups have their own unique identity and a fixed size to make it difficult for a single organization to manage multiple groups and consolidate disproportionate influence, thus improving decentralization.

While the ability to run a validator is fairly restricted to professional node operators, there are two other tiers of nodes that anyone can run on the Celo Network: a full node and a light client. The Celo application/wallet has a light client embedded within it that is optimized for mobile devices, so anyone running the software on their phone is running a light client. The requests exchanged across these light clients (i.e., sending and receiving transactions on the network) must be processed by full nodes, which receive a transaction fee for facilitating the transaction. People running full nodes can set a minimum service fee for processing transactions from light clients and refuse to perform the service if the fee they would collect is insufficient. The eventual goal of the protocol is to have these components operate such that light clients automatically choose full nodes to peer with based on cost, latency, and reliability. However, much fundamental network infrastructure must be laid down before this is achievable. An eventual flow of what this will look like, including validators, may be viewed below.

Image adapted from celo-org medium here

So How Does Governance Work on Celo?

At a high level, Celo has a governance model similar to many other proof-of-stake networks, where the weight of a particular user's vote in the governance process is proportional to the amount of cGold they have staked and the duration of their stake. Celo also supports on-chain governance to manage and upgrade all aspects of the protocol, including upgrading smart contracts, adding new currencies, or modifying the reserve target asset allocation. Changes are currently made through a governance-specific smart contract that acts as the overseer for making modifications to other smart contracts throughout the network. The eventual goal of the protocol is to transition from this smart contract structure for on-chain governance to a Decentralized Autonomous Organization, or DAO, owned and managed by cGold holders. This could function in a form similar to how MakerDAO operates; however, it is far too early to speculate on how the Celo DAO would actually function. For more information on what a DAO is or how they work, click here.

Any network participant is eligible to submit a proposal to the governance smart contract, so long as he or she is willing to lock a portion of his or her cGold along with it. A proposal consists of a timestamp and the information needed to execute the operation code should it be accepted by the network. Submitted proposals are then added to a proposal queue for a duration of up to one week, where they are voted upon by cGold holders in hopes of passing to the next approval phase. Every cGold holder with a locked cGold account may vote for one proposal per day. The top three proposals each day then advance to the approval phase, the original proposers may reclaim their locked gold commitment, and the proposals then enter the referendum phase where they are voted upon by users. Any user may vote 'yes', 'no', or 'abstain' on a proposal, and the weight of their vote is tied to their locked gold commitment. While yet to be implemented, Celo also intends on incorporating an adaptive quorum biasing component, like we observed in Polkadot, to accurately account for voter participation.
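As a rough illustration of the daily queue vote described above, here is a minimal Python sketch; the proposal IDs and locked-gold totals are made up, and the real contract tracks far more state than this.

```python
# Simplified sketch of the daily queue vote: each locked-cGold holder upvotes
# one queued proposal per day, and the three most-backed proposals advance to
# the approval/referendum phases. Names and numbers are illustrative only.

def advance_daily(queue_votes: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the proposals that dequeue today, ranked by locked-gold upvotes."""
    ranked = sorted(queue_votes.items(), key=lambda kv: kv[1], reverse=True)
    return [proposal for proposal, _ in ranked[:top_n]]

# Hypothetical queue: proposal id -> total locked cGold backing it today
queue = {"prop-12": 80_000, "prop-15": 55_000, "prop-9": 120_000, "prop-17": 10_000}
print(advance_daily(queue))  # ['prop-9', 'prop-12', 'prop-15']
```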

So How Decentralized is Celo?

Image adapted from celo medium account here

As I mentioned earlier, Celo has yet to launch their mainnet, so this discussion will be framed through the context of what has transpired throughout their ongoing incentivized testnet, Baklava. As of the time of writing this article, there are currently 75 different validator operators participating in the third phase of the Great Stake Off, but 152 validators in total and 88 validator groups. Moreover, Celo is debating expanding the active set of validators on the network upon the conclusion of the Stake Off. The active set is currently set at 100 validators, and the original plan was to wind down the number of possible validator slots to 80 before mainnet launch. However, Celo recently announced that they now plan to expand the active set to 120, so long as scale testing shows this to be permissible, given the active engagement that validators have shown throughout the Stake Off. Considering that Celo intends on only allowing validator nodes to be run by professional service providers, this is a major step in decentralizing their network and ensuring a globally dispersed, resilient network.

When examining the allocation of resources across the Celo network, there is somewhat of a disparity between the highest-ranked participants and those at the bottom. For example, the top elected validator group has nearly 1.1 MM votes, whereas the lowest elected validator group has only slightly over 200K. Additionally, the top elected group has 5 different validators, whereas the bottom elected group only has one. This illustrates the importance of the validator group, not the individual validator, on Celo. The largest cGold holder within the largest elected validator group has only 194K locked cGold, meaning that each member of the group has less locked cGold than the single participant in the bottom group. Yet, the group collectively is the highest-voted group, so its participants are more likely to participate in consensus and gather rewards. Metrics relating to the decentralization of full nodes and light clients on Celo are not readily available since the network is still in the very early development stages. Consequently, it is difficult to attempt to quantify the degree of decentralization of these layers of the network. The Celo Wallet Application is available for the Alfajores testnet on both the Apple App Store and the Google Play Store, with over 100 downloads for the latter. This suggests that there are at least 100+ light nodes on the non-incentivized testnet, Alfajores.

That's all! I hope you have enjoyed this case study approach to decentralization as much as I have. With the last phase of the Baklava incentivized testnet coming to a close within the next few weeks, mainnet launch slated for later this year, and the protocol's recent announcement of Celo Camp to incubate and assist startups building on the platform, it is certainly an exciting time to be involved with Celo. The Great Celo Stake Off has been no walk in the park, but it has certainly stress-tested the network both technically and from an incentives standpoint. Excluding some economic barriers to entry for new validators attempting to enter the active set, it appears that Celo's approach to decentralization has achieved its goal, at least physically. It will be interesting to see if this continues once true economic conditions are introduced on mainnet, but I am optimistic about the future of the network. If you are interested in seeing whether Celo is the right blockchain for your application, running a Celo cluster, or learning how staking on Celo works, contact one of our LedgerOps experts here. We have been involved with the protocol throughout the entire incentivized testnet and are currently in the second-highest voted group (Chainflow-Validator-Group), so we are highly familiar with the protocol. Thanks again and take care!

Blockchain Learning, IoT, Proof of Stake

IoT: The Problem Blockchain has Been Looking For?

By Connor R. Smith, originally published March 22, 2019

Despite blockchain having existed for over a decade now, few definitive uses have been proven outside of digital currencies. There have been many experiments applying these technologies in areas like supply chain, healthcare, real estate, and even tipping people for their tweets or for watching online ads, but there has yet to be one vertical that has been radically transformed by them. Many feel that this is because the technology is not mature enough yet or because of a general lack of understanding. Certainly, a lackluster user experience and insufficient education play a part, but others have started to argue that blockchain is a solution searching for a problem that may or may not even exist. It seems like new articles surface weekly about startups raising millions of dollars promising to solve some largely nebulous problem using "Blockchain + IoT, or AI, or Drones, or all of the above…". At Consensus Networks, we're focused on finding and supporting protocols that are technically sound and addressing real-world use cases. One area we have been particularly excited about lately is the ability of blockchain to secure Internet of Things (IoT) data.

In 2018, there were over 17 billion connected devices around the world, 7 billion of which were IoT-enabled. These numbers are projected to double or triple over the next 6 years. IoT devices communicate with one another by gathering and exchanging data through sensors embedded in the device, enabling greater automation and efficiency. Devices that are seemingly unrelated can communicate with one another, driving the convergence of many verticals ranging from smart homes, cities, and cars to medical and industrial IoT. For example, IoT-enabled machines in a manufacturing plant could communicate information regarding system health and other mechanical data via a centralized platform. Plant operators could then take corrective action before a malfunction occurs, easily conduct more preventative and informed maintenance, and more accurately predict production rates. In fact, studies have found that over 90% of tech, media, and telecommunications executives feel that IoT is critical to nearly all of their business units and will drive the greatest business transformation over the next 3 years.

Now you're probably thinking, "Okay, so if IoT can fix all of the world's problems, why do we need to add blockchain too?". IoT may be a powerful emerging force, but it has some critical flaws. While IoT devices are great for communicating streams of data and supporting real-time device monitoring, they often have extremely poor endpoint security. For example, in 2017 the FDA had to recall over 500,000 internet-enabled pacemakers after finding vulnerabilities that allowed hackers to gain control of the device. Beyond healthcare, IoT data privacy and security issues are an even greater concern when considering the connected future of autonomous vehicles, homes, and smart cities. Another shortcoming of current IoT networks lies in their scalability. Conventional IoT network architectures are centralized, with the network of devices sending data into the cloud, where it is processed and sent back to the devices. Considering the deluge of IoT devices projected to enter the market, scaling this infrastructure will be highly difficult and will expose vulnerabilities that hackers can exploit to compromise the network and access your data.

Fortunately, integrating blockchain technology with IoT networks provides a path forward to overcome the scalability, privacy, and security issues facing IoT today and accelerate the adoption of both technologies. As opposed to having a centralized system with a single point of failure, a distributed system of devices could communicate in a trusted, peer-to-peer manner using blockchain technology. Structuring the network in this manner means that it would have no single point of failure, so even if a device were compromised, the remaining nodes would remain operable. Moreover, smart contracts could be integrated with the network to enable IoT devices to function securely and autonomously without the need for third-party oversight. Consequently, blockchain-enabled IoT networks could exhibit greater scalability, security, and autonomy simply by modifying their current network architecture and implementing a more decentralized approach.

However, perhaps the most important benefit blockchain provides IoT networks comes from its cryptographic security. Sharing data across a cryptographically secured network makes it far less susceptible to hackers, by helping to obfuscate where data is flowing, what is being exchanged, or what devices are transacting on the network. Whereas security in modern IoT networks was added as an afterthought, encryption and cryptographic keys are a core component of blockchain technology. Moreover, some networks are beginning to incorporate zero-knowledge proofs, which means that network security for IoT devices could be bolstered even further. 

Image adapted from Lukas Shor here

The underlying mathematics and mechanics of zero-knowledge proofs are highly complex, but they essentially allow two parties to prove that a piece of information is true without revealing what the information is or how they know it to be true. In the context of IoT devices, this means that a network of IoT devices could share data in total anonymity and with complete privacy. No information regarding the transaction would be revealed other than proofs verifying that the network knows it is legitimate. Thus, the network maintains complete functionality while preserving maximum security. Regardless of whether a blockchain-enabled network of IoT devices utilizes zero-knowledge proofs or not, simply utilizing a shared, encrypted ledger of agreed-upon data can provide many security benefits in IoT networks.

IoTeX Logo

While there have been several projects that have attempted to tackle IoT and blockchain, one that we are excited to support is IoTeX. Founded by a team of cryptography and computer science experts in 2017, IoTeX is a privacy- and security-centric blockchain protocol that aims to create a decentralized network designed specifically for IoT devices. IoTeX uses a network architecture consisting of blockchains within blockchains, where a root chain manages many different subchains. Designing the network in this manner allows IoT devices that share an environment or function to interact with increased privacy, with no risk to the root chain if a subchain is compromised.

Aside from enhanced privacy and security, this design allows for greater scalability and interoperability as subchains can transact with the root chain directly or across the root chain to other subchains. IoT devices on the IoTeX network are also able to transfer data with one another in total privacy through the incorporation of lightweight stealth addresses, constant ring signatures, and bulletproofs. IoTeX also incorporates a Randomized Delegated Proof of Stake (RDPoS) mechanism for achieving consensus that they refer to as Roll-DPoS. Using this mechanism, nodes on the IoTeX network can arrive at consensus much faster with instant finality and low compute cost, making it much more friendly to IoT devices. Moreover, the IoTeX team recently released their first hardware product that leverages their blockchain network, Ucam. Ucam is a home security camera that writes data it records directly to the IoTeX blockchain, preventing it from being accessed by device manufacturers or sold to third parties like Google or Amazon. Ucam guarantees absolute privacy and provides users with secure blockchain identities which they can use to control their data.

Image adapted from Venture Beats here

Thanks for reading! More articles to come regarding use cases for IoT and Blockchain and what the marriage of these two technologies might look like for Web 3.0 and Industry 4.0. Let us know what you think and find us on twitter or discord if there are any questions or areas you’d like us to explore! If you’re interested in finding out more about IoTeX, Ucam, or how blockchain can improve your IoT solution, feel free to contact one of our LedgerOps experts here. We have supported IoTeX for nearly a year now, and have been running a delegate node on their mainnet since genesis. Needless to say, we are highly familiar with the protocol and eager to see if IoTeX or any of our other blockchain network services are a good fit for your IoT application!

Blockchain Learning, Proof of Stake

Kusama & Polkadot’s Approach to Decentralization

By Connor Smith

Note: This is the sixth installment of a series detailing different approaches that blockchain networks have taken to decentralize their network. Part 1 introduced the concept of decentralization and the inter-play it has with certain aspects of crypto networks like governance, incentives, and network architecture. If you missed that article I highly recommend going back and reading it here. The subsequent articles have examined the decentralization of Bitcoin, Factom, Cosmos, & Terra. If you missed those and would like to go back and read them before we dive into Kusama & Polkadot, you may do so, here, here, here, & here respectively.

Hey everybody, and thank you for joining me again as we continue our examination of the varying approaches to decentralization throughout the crypto ecosystem. Two weeks ago we examined Cosmos’ approach to decentralization and the unique challenges the protocol faces in its quest to build the internet of blockchains, and then spent last week studying one of the blockchains building on its platform, Terra. This week we will examine Polkadot, another protocol attempting to build a network of interconnected blockchains. Yet, it is taking a radically different approach from what we observed with Cosmos. However, as Polkadot has not yet launched its mainnet, much of the discussion will be framed through the lens of what has ensued on its ‘Canary Network’, Kusama. If you are wondering what a canary network is or why we will be examining two protocols in this article, don’t worry, I promise I will address these concerns in due time. However, to better discern how Polkadot differs from Cosmos and its approach to decentralization, it is crucial to first understand the history of the protocol and the institutions leading its development.

So What is Polkadot and Where Does Kusama Come In?

The Polkadot whitepaper was released in 2016 by Dr. Gavin Wood, co-founder of Ethereum and of blockchain development powerhouse Parity Technologies. Designed to connect all types of blockchains (private, consortium, public, and permissionless) and technologies, Polkadot aims to serve as the backbone of an ecosystem in which independent blockchains can seamlessly transact with one another in a trustless manner, enabling the decentralized internet at scale. Gavin and Parity Technologies are veterans of the crypto industry and have been instrumental in the evolution of blockchain technology. Many people tend to credit fellow Ethereum co-founder and figurehead of the protocol, Vitalik Buterin, with leading the blockchain revolution and bringing stateful programming to blockchains for applications outside of digital money. These assertions are well justified, seeing as he authored the Ethereum whitepaper and has led the direction of the protocol's development since its inception. However, many of Ethereum's core components and widely used technologies are the work of Gavin and Parity. For example, during Gavin's time with Ethereum, he created the Solidity smart contract language that powers all of the dApps running on the network and was responsible for the network's first functional release in 2014. Parity Technologies is also the lead developer and, until recently, maintainer of the popular Parity Ethereum client. The Parity Ethereum client is an ultra-efficient alternative to the popular geth node run by many Ethereum developers; it powers an estimated 20% of Ethereum nodes and portions of Infura, the open node cluster used by many developers that processes 13 billion transactions a day.

Needless to say, Gavin & Parity, have been instrumental in shaping the decentralized web and blockchain, thus far. Many of the protocols that followed Ethereum have attempted to build upon or adapt its concepts in some way borrowing from the innovations that these two produced. However, throughout all of the work Gavin & Parity performed for Ethereum, they began to notice that most approaches to blockchain networks were not practical in terms of scalability or extensibility as a result of inefficiently designed consensus architectures. Hence, Polkadot was proposed as a heterogeneous multi-chain framework that would allow many different blockchains, irrespective of consensus mechanism, to be interoperable with one another and overcome the following five shortcomings of conventional crypto networks: scalability, isolatability, developability, governance, & applicability. If you are curious as to how Polkadot views them, check out their whitepaper here. Similar to Cosmos, Polkadot’s heterogeneous multi-chain architecture is hyper-focused on addressing the scalability and isolatability problems, believing that if these two are adequately addressed the subsequent ones will reap tangible benefits and see improvement as well.

Shortly thereafter, Gavin, in conjunction with Robert Habermeier and Peter Czaban of the Web3 Foundation, officially founded Polkadot and commenced R&D on the ambitious effort. The Web3 Foundation is a Swiss-based organization founded with the intent of supporting and nurturing a user-friendly decentralized web where users own their own data and can exchange it without relying on centralized entities. The foundation conducts research on decentralized web technologies and supports different projects building them, Polkadot being the first. In May of 2018, the initial proof-of-concept for Polkadot was released as a testnet, and three subsequent iterations that integrated additional features were released in less than a year. Testnets provide an excellent proving ground for networks to work out any technical bugs that could occur at scale before a mainnet launch.

Starting with Cosmos's Game of Stakes, the idea of using incentivized testnets to entice developers to truly stress-test a network before mainnet launch has become largely canonical as the step preceding the launch of any proof-of-stake network. Polkadot took this a step further and released Kusama, an early, unaudited release of Polkadot that serves as the experimental proving ground for the network. Affectionately referred to as 'Polkadot's Wild Cousin', Kusama is Polkadot's 'canary' network, or a highly experimental reflection of what the production version of Polkadot will be like. Kusama allows developers to test governance, staking, and more in an authentic environment with real economic conditions. Thus, developers and those validating on the network can be adequately forewarned of any potential issues that may transpire on Polkadot and correct them before a mainnet deployment. Kusama differs from a traditional testnet in that it is an entirely separate network from Polkadot, with its own token (KSM), and is run by the community. It will exist in perpetuity so long as the community supports it and is not inherently tied to Polkadot aside from inheriting its design and functionality.

So How Do These Heterogeneous Multi-Chain Networks Work?

There are three fundamental components that comprise the architecture of the Polkadot ecosystem: the relay chain, parachains, and bridges. For those of you who have been following along in this series, each of these pieces is largely analogous to the hub, zone, and pegzone concepts described in my Decentralization of Cosmos article. Parachains, or parallelizable chains, are the individual, customized blockchains built on top of Polkadot that gather and process transactions within their own network. All computations performed on a parachain are independent from the rest of the Polkadot ecosystem. Thus, parachains can implement data storage and transaction operations in the manner most befitting the problem they are trying to solve, without being tethered to the technical underpinnings of another protocol, like its scripting language or virtual machine. Parachains are then connected to the relay chain, i.e., Polkadot itself, which coordinates consensus and relays transactions of any data type between all of the chains on the network. Lastly, bridge chains are a specialized permutation of a parachain that link to protocols with their own consensus, like Ethereum or Bitcoin, and communicate with them without being secured by the Polkadot relay chain. An image of how these pieces all fit together may be viewed below:

Image adapted from https://polkadot.network/technology. Pink denotes the relay chain, orange the parachains, and blue a bridge chain

Designing the network in this manner has several key benefits, namely high security and near infinite scalability. Polkadot pools all of the security from the relay chain and the parachains building on top of it, irrespective of consensus mechanism, and then shares that aggregated security across the entire network. The relay chain provides a ground source of truth for the network by handling transactions and arriving at consensus, but any computation performed on the network can be scaled out in parallel across the appropriate parachains. Moreover, parachains can be attached to other parachains to create highly distributed networks for processing transactions. This allows the transaction volume of the network to be scaled out immensely without placing a crippling burden on the relay chain itself and allowing it to maintain the same level of security. Each parachain can maintain its own notion of validity for the transactions it processes, seamlessly disseminate that information to other parachains via the relay chain, and then the network as a whole can arrive at consensus.

However, this is only feasible with participation from the following core network stakeholders: validators, collators, and nominators. Similar to other proof-of-stake networks, validators are the node operators responsible for verifying transactions on the network and producing blocks for the Polkadot blockchain. Likewise, nominators are those who elect validators into the active set on their behalf by staking with them in exchange for a portion of their block rewards. The new player in this ecosystem is the collator, who is responsible for consolidating the transactions on the respective parachain they monitor into blocks and proposing proofs of those blocks to the validators. This eases the technical burden on validators by allowing them to only verify candidate blocks from parachains as opposed to processing and verifying thousands of parallel transactions. Hence, the relay chain can arrive at consensus in seconds as opposed to minutes and maintain the security offered by a highly decentralized network. Collators can also act as 'fishermen' who are rewarded for identifying parties on the network acting maliciously. An image depicting how all of these stakeholders interact across the different network layers may be viewed below:

It is important to note that I am simplifying significant portions of how Polkadot works at the technical level. The project is highly complex, with a myriad of intricate components at each layer that would take far too long to detail in a single article. For example, Polkadot uses a novel proof-of-stake consensus algorithm known as GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) that separates block production from block finality, allowing blocks to be finalized almost immediately. For more on GRANDPA check out this article, and if you are interested in learning more about the underlying technology of the network, check out the whitepaper here.

Governance on Polkadot

Similar to other proof-of-stake networks, the crux of Polkadot's governance hinges on the idea of stake-weighted voting, where all proposed changes require a stake-weighted majority of DOTs (or KSM on Kusama) in order to be agreed upon. However, Polkadot also incorporates a tiered governance structure and unique voting mechanisms in an attempt to decentralize power and governing authority on the network. Anyone holding the protocol's native currency, DOTs, has the ability to directly participate in governance on the network. They can do everything from voting on proposals brought forth by the community, to nominating validators to participate in the network, to prioritizing which referenda are voted upon, and more. Governance itself is completely dissociated from validating on the network, aside from the fact that validators can use their DOTs to vote as described above.

Polkadot also has a Council that will range in size from 6 to 24 members with prioritized voting rights. Any DOT holder is eligible to run for the Council, and members are elected by the community in hopes that they will propose referenda that are sensible and benefit the network as a whole. In addition to preferred voting rights, council members have the ability to veto incoming proposals if they believe they are harmful to the protocol. However, after a cool-down period, the proposal may be resubmitted, and if the council member who vetoed it originally is still present, he or she will be unable to do so again. To protect against council members becoming negligent in their duties or abusing their governing power, members are elected on a rolling basis, with the term of each council member being equal to the size of the council times two weeks. An illustration of this may be viewed below.

To combat the fact that full community participation in voting on any given referendum is unlikely, Polkadot implements what is known as Adaptive Quorum Biasing to change the supermajority required for a proposal to pass based on the percentage of voter turnout. Consequently, when voter turnout is low, a heavy supermajority of 'aye' votes is required for a referendum to pass, or a heavy supermajority of 'nay' votes is required to reject it. Yet, as voter turnout approaches 100%, the system adapts and only a simple majority either way is required, to account for the greater number of total votes. DOT holders' votes are also weighed proportionally based on the amount of DOT they own and the amount of time they choose to lock those tokens for after the referendum has ended. For example, any DOT holder voting on a proposal must lock their DOTs for at least 4 weeks, but they can instead choose to lock them for up to 64 weeks to place a greater weight on their vote. All voting also occurs on-chain, so any approved proposals have a direct and immediate effect on how the network behaves.
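For the curious, here is a small Python sketch of the 'positive turnout bias' form of adaptive quorum biasing that Polkadot documents for publicly submitted referenda. The numbers are illustrative only, and the live runtime's exact parameters and vote-weighting rules may differ.

```python
# Sketch of adaptive quorum biasing using the "positive turnout bias" rule:
# at low turnout a supermajority of 'aye' votes is needed, and the threshold
# relaxes toward a simple majority as turnout approaches the full electorate.

from math import sqrt

def passes_positive_turnout_bias(ayes: float, nays: float, electorate: float) -> bool:
    """Return True if the referendum passes under positive turnout bias."""
    turnout = ayes + nays
    if turnout == 0:
        return False
    return nays / sqrt(turnout) < ayes / sqrt(electorate)

electorate = 1_000_000  # total issuance eligible to vote

# Low turnout (1%): even 60% 'aye' is not enough...
print(passes_positive_turnout_bias(ayes=6_000, nays=4_000, electorate=electorate))      # False
# ...but a heavy supermajority at the same turnout passes.
print(passes_positive_turnout_bias(ayes=9_500, nays=500, electorate=electorate))        # True
# At ~full turnout a simple majority suffices.
print(passes_positive_turnout_bias(ayes=520_000, nays=480_000, electorate=electorate))  # True
```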

So How Decentralized is Polkadot?

As mentioned earlier, Polkadot has yet to launch its mainnet, so this discussion will be framed through the context of Kusama. As of the writing of this article, there are over 300 nodes supporting the Kusama network and 160 active validators distributed around the world. Moreover, there is currently a proposal up for election to increase the active set from 160 to 180 validators, with significant support from the community, suggesting that the network will become even more decentralized in the near future. The Council has 13 members with a combined backing of over 1 MM KSM and 264 voters. Of the 8.380 MM KSM issued so far, 2.641 MM, or 31.51%, is staked across the active set of validators. Similar to the other proof-of-stake networks we have observed so far, the top 10 validators control a significant portion of the staked KSM on the network, albeit far less than that of networks like Cosmos and Terra. Of the 2.641 MM KSM staked on the network, only about 18% resides with the top 10 validators by amount staked. This is all the more impressive when considering that governance is completely decoupled from validating on the network. Of the total possible voting power on the network, the KSM held by the top 10 validators by stake amounts to only roughly 5% of the overall voting power.

Given the scope of Polkadot, to not only serve as a network of heterogeneous multi-chains but as a platform for connecting private and public blockchains and creating a truly decentralized web, having a sufficiently decentralized network across all aspects (architecturally, economically, and from a governance perspective) will be hyper-critical to its success. If how decentralization on Kusama has materialized is an adequate proxy, then the future looks exceedingly bright for Polkadot. However, it is difficult to tell how this level of decentralization will carry over from Kusama to Polkadot. Kusama was launched to test out the different technical parameters of Polkadot and simulate what a live environment would be like for those validating on Kusama. Consequently, it has been an exceedingly open community, encouraging of people to participate, which has likely led to the magnitude of decentralization observed on the network. Considering that 50% of genesis rewards from Polkadot were already distributed via a token presale that occurred over two years ago, it is difficult to say with certainty that this level of decentralization will occur on Polkadot come mainnet launch. While many of those on Kusama have been involved with the project for a long time and intend on participating in Polkadot, the crypto world has evolved tremendously over the last two years. Therefore, there is some inherent possibility that the old money that entered two years ago has very different interests than the newcomers who have become involved since. However, the team behind the project is a force to be reckoned with in the crypto world that has worked hard to make Polkadot a reality, and the community grows more and more every day, so I'm optimistic that its launch this year will exhibit decentralization in a manner more aligned with how Kusama has evolved.

That's all for this week, I hope you enjoyed the article! I know we unpacked quite a bit of information here, but, as I said, Polkadot is one of the most technically advanced protocols making waves right now and I really just scratched its surface. If you're interested in learning more about Polkadot or Kusama, seeing if it's the right fit for your application, or want to get involved in staking, feel free to reach out to us at Consensus Networks! We are actively involved in the community, have run validators for both Kusama and the current Polkadot testnet (Alexander) for some time, and are gearing up for the Polkadot mainnet, so we are highly familiar with the protocol. Contact one of our LedgerOps experts here with any questions you may have about the network and we will get back to you as soon as we can. We are excited for the future of Polkadot and the impact it could have on the decentralized web, and eager to help you access the network. Thanks again for reading and tune in next week as I conclude my examination of decentralization with Celo.

Blockchain Learning, Proof of Stake

An Inside Look at the Celo Kitchen

We're about a month into the Great Celo Stakeoff and it is definitely the most dynamic testnet launch I've ever been a part of. I'm constantly checking vote totals and running election calculations. On Discord, groups are frequently asking for new validators to associate with and are chatting late into the night. And many of the validators (myself included) who had a slow start are already unable to meet the minimum threshold required to become elected as a validator – but we're all trying to get back in the game!

I’ll be the first to admit I had only a basic understanding of how the Celo election process worked and didn’t have anything more than a basic uptime strategy coming into the testnet launch – which is probably why we were quickly out of the running. The top validator groups figured it out quickly and were able to take advantage! So, for the rest of us, here’s how the Celo elections work and some strategies that may help you get back into the game if you’re already out or thinking about running a validator on Celo in the future. 

Celo uses an election algorithm called the D'Hondt Method. The Wikipedia page has a decent explanation, and I'll use that to demonstrate how the elections work for Celo. Celo validators currently have two places to receive votes: their group and/or their validator. For each one to have a chance of being elected, the group and validator must each have at least 10,000 (10k) cGold staked. For each validator in a group, the group leader must have an additional 10k locked in the group address (4 associated validators means 40k cGold locked).

From an election standpoint, the amount staked to a validator, as long as it’s at least 10k cGold, doesn’t really matter. What does matter is the total amount staked to the group and its validators. When an election occurs, Celo identifies the group with the highest total and elects a validator from that group first. It then starts to apply The D’Hondt Method, which means first dividing the total of the top group by two (for election calculations) then looking for the next highest total. If that first group still had the highest total, even after halving their total stake, they would elect a second validator. If not, the next group with the highest total would be elected (and their effective stake would drop by half as well). This process continues until 100 validators are elected. Each time a group has a new validator elected, their effective stake (for the election only) drops by an increasing factor. So a group with 100k would go to 50k the first time elected; the second time elected, the original total would be divided by 3 to 33k; the third time, divided by 4 to 25k and so on. If that’s confusing, I’ve got an example below:


Group     Number of Validators   Total Votes
Group A   4                      700k
Group B   3                      600k
Group C   2                      325k
Group D   1                      200k

For our test case, we’ll start off with 4 validator groups, 6 electable validator positions, and 10 total potential validators. Group A has 4 validators, Group B has 3, Group C has 2, and Group D has 1. The total number of network votes is 1,825,000 divided among the groups as seen in the chart above. An election would go as follows:


Group     Pass 1   Pass 2   Pass 3   Pass 4   Pass 5   Pass 6   Validators Elected
Group A   700k     350k     350k     233k     233k     233k     3
Group B   600k     600k     300k     300k     300k     200k     2
Group C   325k     325k     325k     325k     162k     162k     1
Group D   200k     200k     200k     200k     200k     200k     0

On the first pass, Group A gets the top spot since they have the highest total. Group B wins a validator in pass 2 because Group A's effective votes drop by half. Now for pass 3, both Group A and B have been halved, but Group A's votes are still higher than the rest, so they win a second validator and their original votes are now divided by 3 to 233k. In pass 4, it's Group C's turn to win a validator. This continues until 6 validators are elected. Some things to note: Group D does not get a validator elected, even though they have the highest per-capita votes (tied with Group B) at 200k. Group A actually has the second-lowest per-capita votes (average votes per validator) at 175k but still elects 3 validators.
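If you'd rather see the counting rule as code, here is a short Python sketch that reproduces the example above. It is only an illustration of the D'Hondt math, not Celo's actual election implementation, and the group names and vote totals are just the hypothetical ones from the table.

```python
# Sketch of the D'Hondt election walked through above. Each pass elects a seat
# for the group with the highest quotient (original votes divided by one plus
# the seats it has already won), capped by the number of validators in the group.

def dhondt(groups: dict[str, tuple[int, int]], seats: int) -> dict[str, int]:
    """groups maps name -> (num_validators, total_votes); returns seats won."""
    won = {name: 0 for name in groups}
    for _ in range(seats):
        best, best_quotient = None, 0.0
        for name, (validators, votes) in groups.items():
            if won[name] >= validators:         # group has no more validators to elect
                continue
            quotient = votes / (won[name] + 1)  # effective votes for this pass
            if quotient > best_quotient:
                best, best_quotient = name, quotient
        won[best] += 1
    return won

groups = {
    "Group A": (4, 700_000),
    "Group B": (3, 600_000),
    "Group C": (2, 325_000),
    "Group D": (1, 200_000),
}
print(dhondt(groups, seats=6))
# {'Group A': 3, 'Group B': 2, 'Group C': 1, 'Group D': 0}
```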

Ok so what’s the strategy here?

First, the total staked (or locked) to an individual validator doesn't really matter. A validator with only 10k locked can be elected over validators with higher totals as long as their group's total is higher. What this means is that group leaders (the only ones who can lock gold for their group) should only be sending rewards to their group address and locking it there. This will allow them to add additional validators and consolidate their votes.

The D'Hondt Method favors the largest groups (as demonstrated above), so groups will probably want to consolidate their totals by adding additional validators to improve their chances of being elected. What does this mean in the long term? Currently, transactions are not allowed between third parties, so it is up to the group leader alone to save and add validators. But once the network goes to mainnet, what's to stop people from creating super groups of 10 or more validators? And does it matter? We're already seeing consolidation: there were over 100 groups when the games started last month and we're down to fewer than 50. This will probably be amplified on mainnet. As these groups consolidate, will that affect the decentralization of the mainnet? And again, will it matter? If validators are free to join groups as they please, they can obviously leave a group that is misbehaving. This is similar to Bitcoin mining in the sense that although there are a small number of mining pools, miners can move between pools as desired. The remaining question to be answered, then, is how much power do the miners actually hold? In 2017, we saw the Bitcoin miners bow to users and SegWit2x failed; will Celo users wield the same authority? Once the mainnet launches, token holders will be introduced to the mix and we will see how they choose to allocate their capital to help ensure decentralization.

So far, so exciting! For those following along, the stakeoff will continue until Feb 19th, with Phase 3 starting Feb 5th. There is some talk of expanding the number of allowed validators beyond 100 to allow those who have already fallen out of the running back in – but that remains to be seen. Additionally, the Celo foundation is performing security audits on those validators that request it for their seal of approval and bonus stake. You can take a look at the current state of the network here as well as the validators participating here.

Blockchain Learning, Proof of Stake

Terra’s Approach to Decentralization

By Connor Smith

Note: This is the fifth installment of a series detailing different approaches that blockchain networks have taken to decentralize their network. Part 1 introduced the concept of decentralization and the inter-play it has with certain aspects of crypto networks like governance, incentives, and network architecture. If you missed that article I highly recommend going back and reading it here. The subsequent articles have been examinations of the decentralization on Bitcoin, Factom, & Cosmos. If you missed those and would like to go back and read them before we dive into Terra, you may do so, here, here, & here respectively.

Hey everybody, and welcome back! Last week, we dove into our first proof-of-stake network, Cosmos, and analyzed how it has approached decentralization in its quest to build the 'Internet of Blockchains'. In addition to assessing how decentralization has worked thus far on Cosmos, we also got into the nuts and bolts of how the underlying technologies supporting its ecosystem of interconnected, application-specific blockchains (Tendermint BFT, ABCI, & IBC) work, and the modular network design of Cosmos with hubs and zones. This week, I will be picking up right where we left off and analyzing one of the networks building on top of Cosmos that is seeing major real-world use and adoption, Terra.

Terra aims to provide a price-stable cryptocurrency, built on top of the Cosmos SDK, that will function as the infrastructure layer for decentralized financial applications. The protocol utilizes an elastic monetary supply that allows it to maintain both a stable price and the censorship-resistant capabilities of Bitcoin, enabling it to be used in everyday transactions. Currently, Terra is the backend technology powering CHAI, a mobile payments application that allows users to link their bank accounts and participate in e-commerce using Terra's currency, receiving discounts in exchange.

Terra is currently backed by South Korean internet giant Kakao and integrated with over 10 e-commerce platforms in Southeast Asia. The platform has seen rapid growth since launching last April, having just reached over 1,000,000 users last week, and continues to grow. With so many cryptocurrencies struggling to break into the commercial sector, and with seemingly every new year being the year we will finally start to see adoption, this is certainly no trivial feat. So now, without further ado, let's dive into Terra and see how this network has approached decentralization on their quest to become the largest payments platform in Asia!

So What is Terra and How Does it Work?

Before we dive into the technical underpinnings of Terra, the problem it solves, and its approach to doing so, it will help to first have some context regarding the rather unconventional background of its founders and some of the history leading up to its launch. Work on the project commenced in April of 2018, led by co-founders Daniel Shin and Do Kwon. Kwon had previously worked as a software engineer at Apple and Microsoft, in addition to being founder and CEO of a startup called Anyfi that attempted to use peer-to-peer mesh networks to create a new, decentralized internet. Shin was a successful serial entrepreneur, having built and sold multiple e-commerce companies in East Asia, and, at the time, was CEO of his most recent startup, TicketMonster, the leading e-commerce platform in Korea. Leveraging their extensive backgrounds in e-commerce and distributed systems, the pair sought to create a modern financial system built on top of a blockchain that could be used by people to make everyday payments. The two believed that the major roadblocks to adopting cryptocurrencies largely stemmed from the extreme price volatility and the lack of a clear path to adoption that most networks exhibited. Thus, they designed Terra to be a price-stable, growth-driven cryptocurrency focused on real-world adoption from day one. Leveraging Shin's deep connections in e-commerce, they formed a consortium of e-commerce companies known as the Terra Alliance. Within a few months of launching, 15 Asian e-commerce platforms had joined, representing a total of $25 billion in annual transaction volume and 40 million customers. This coincided with a $32 million seed round the team raised from some of the world's largest crypto exchanges and top crypto investment firms. With a war chest of funding in place and real-world partnerships aligned, the team was off to the races as they started building the project and integrating it with e-commerce platforms.

Seeing as Bitcoin launched over a decade ago as a peer-to-peer electronic cash system, you may be wondering why a protocol like Terra was still tackling the issue of digital payments. This is largely because Bitcoin and other cryptocurrencies exhibit significant price volatility, making consumers nervous about whether they will maintain their value when they try to transact with them later. For perspective, crypto markets can fluctuate 10% or more in either direction on any given day, and in late 2017 Bitcoin was trading at nearly $20,000/BTC and less than two months later was trading between $6,000 – $9,000 (at the time of writing this article Bitcoin is trading at $8,415.65). Terra was far from being the first or only project to realize that price volatility posed a significant barrier to crypto adoption. Attempts at creating a price-stable cryptocurrency, or stablecoin, date back as far as 2014 with BitShares, and have proliferated at a momentous rate in the wake of the volatility exhibited in the last crypto bull market in late 2017.

Stablecoins are exactly what their name suggests, a cryptocurrency designed to be highly price-stable in respect to some reference point or asset and maintain the following three functions of money: a store of value, a unit of account, and a medium of exchange. While this sounds fairly intuitive and straightforward, the engineering behind these instruments is quite difficult with no agreed upon approach. Cornell University attempted to codify the different approaches some networks are taking, and put forth a paper classifying the different design frameworks for stablecoins, which can be found here. The finer nuances and mechanics of each approach exceed the scope of this article, but the study revealed that most stablecoins maintain price using one of the following mechanisms: a reserve of pegged/collateralized coins or assets, a dual coin design, or algorithmically. 

Maintaining a reserve of a pegged or collateralized asset allows the organization controlling the stablecoin to maintain price by incentivizing users to expand or contract the supply until it returns to its pegged price. Users are able to earn money by expanding the supply when the price is high and redeeming when it is low through arbitrage until the opportunity disappears and the price has equilibrated. The dual coin approach is where a network implements a two token system in which one coin is designed to absorb the volatility of the first through a process known as seigniorage. This is where the secondary coin is auctioned in exchange for the stable coin if it dips below the peg and the proceeds are burned to contract the supply and stabilize the price. Conversely, if the price of the stablecoin is above that of the peg, new coins will be minted to those holding the secondary coin to expand the supply and level the price. Lastly, the algorithmic approach uses complex algorithms and quantitative financial techniques to adjust the currency price as needed without any backing of pegged or collateralized assets. Hence, it behaves analogously to a traditional cryptocurrency in the sense that a user’s balance and outstanding payments vary proportionately with changes in the market cap of the coin, but it provides a more stable unit of account. 

Terra utilizes a dual coin approach in which the transactional currency, Terra, represents an ecosystem of cryptocurrencies pegged to real currencies like the USD, EUR, KRW, and the IMF SDR, and Luna is the secondary coin that absorbs the volatility. All of the Terra sub-currencies (TerraKRW, TerraUSD, etc.) can be swapped between one another instantly at the effective exchange rate for that currency pair, allowing the network to maintain high liquidity. Since the prices of these fiat currencies are not natively known to the blockchain, a network of decentralized price oracles is used to approximate the true value of the exchange. Oracles, in this context, are essentially trusted data sources that broadcast pricing data generated from currency exchanges onto the network. They vote on what they believe the true price of the fiat currencies to be and, so long as they are within one standard deviation of the true price, are rewarded in some amount of Terra for their service. Should the price of Terra deviate from its peg, the money supply is contracted or expanded as needed using a seigniorage method similar to that described above. Hence, oracles mining Terra transactions absorb the short-term costs of contracting the supply and gain from increased mining rewards in the mid to long term.
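As a rough sketch of the oracle reward rule described above, here is a simplified Python version. The live protocol's exact aggregation method and reward amounts differ, and the oracle names and prices below are made up.

```python
# Simplified sketch of the oracle reward rule: each oracle submits a price vote,
# and votes within one standard deviation of the aggregate price earn a reward.
# Aggregation here uses a plain mean/stdev for illustration only.

from statistics import mean, stdev

def rewarded_oracles(votes: dict[str, float]) -> list[str]:
    prices = list(votes.values())
    consensus = mean(prices)
    band = stdev(prices)  # one standard deviation around the aggregate price
    return [oracle for oracle, price in votes.items()
            if abs(price - consensus) <= band]

votes = {"oracle-a": 0.998, "oracle-b": 1.001, "oracle-c": 1.000, "oracle-d": 1.060}
print(rewarded_oracles(votes))  # the outlier 'oracle-d' misses the reward
```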

Luna and the Critical Role it has in the Tokenomics & Governance of Terra

However, since Terra is a proof-of-stake network, oracles must have stake in the network in order to be able to mine Terra transactions. This is where the second token, Luna, comes in. Luna is the native currency of the protocol that represents the mining power of the network, and it is what miners stake in order to be elected to produce blocks. Luna also plays a critical role in defending against Terra price fluctuations by allowing the system to make a market for Terra, agreeing to be the counterparty for anyone looking to swap Terra and Luna at the target exchange rate. In other words, if the price of TerraSDR << 1 SDR, arbitrageurs can send 1 TerraSDR to the system in exchange for 1 SDR's worth of Luna, and vice versa. Thus, miners can benefit financially from risk-free arbitrage opportunities and the network is able to maintain an equilibrium around the target exchange rate of Terra irrespective of market conditions. Luna is also minted to match offers for Terra, allowing any volatility in the price of Terra to be absorbed from Terra into the Luna supply. In addition to the transaction fees validators collect from producing blocks, the network will also automatically scale seigniorage by burning Luna as demand for Terra increases. As Luna is burned, mining power becomes scarcer and the price of Luna should theoretically increase. This scales with the transaction volume and demand on the network, allowing miners to earn predictable rewards in all economic conditions.
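Here is a toy Python model of the system-side swap described above. The prices and amounts are made up, and this ignores details like spreads and fees that the real protocol applies.

```python
# Toy model of the Terra <-> Luna swap the system offers as counterparty.
# The protocol always prices 1 TerraSDR at 1 SDR, paying out (or accepting)
# the equivalent value in Luna at the oracle's Luna price. Figures are made up.

def swap_terra_for_luna(terra_in: float, luna_price_sdr: float) -> float:
    """Burn `terra_in` TerraSDR; mint Luna worth the same number of SDR."""
    return terra_in / luna_price_sdr

def swap_luna_for_terra(luna_in: float, luna_price_sdr: float) -> float:
    """Burn `luna_in` Luna; mint TerraSDR worth the same number of SDR."""
    return luna_in * luna_price_sdr

# TerraSDR trading at 0.95 SDR on the market, Luna at 2 SDR:
# buy 100 TerraSDR for 95 SDR, redeem with the system for 50 Luna (~100 SDR),
# pocketing ~5 SDR while the Terra supply contracts back toward the peg.
luna_received = swap_terra_for_luna(terra_in=100, luna_price_sdr=2.0)
print(luna_received)  # 50.0
```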

In the most recent update to the protocol (Columbus-3) on December 13, 2019, Luna gained even more utility in the Terra ecosystem by allowing its holders to participate in on-chain governance. Luna holders can now submit proposals for parameter or monetary policy changes to the network, as well as make general text proposals and request funds from the community pool (a portion of the seigniorage tokens available to fund community initiatives). If a proposal receives a supermajority of supporting votes, the proposal will be ratified and the changes made. Not only does this extend the functionality of Luna, but it also broadens the base of individuals who can actively participate in the network. Before the Columbus-3 update, Luna only added value to miners on the network, but now anyone can purchase Luna and use it to participate in governance. Moreover, Luna transactions are tax free, so it is even easier for non-miners to acquire it and participate on the network.

Terra also has a governmental body known as the Treasury that is designed to allocate resources from seigniorage to decentralized applications (dApps) being built on top of the platform. After registering as an entity on the Terra Network, a dApp can make a proposal to the Treasury for funding, and Luna validators may then vote on whether to accept or reject the application based on its economic activity and use of funding. Should the application receive more than ⅓ of the total available Luna validating power, it will be accepted and the Treasury will allow the dApp to open an account and receive funding based on the proportional vote it received from Luna validators. The Treasury ultimately determines how funds are allocated to dApps, but, if the community feels a firm is not delivering results, validators can vote to blacklist the dApp. Ultimately, this tiered governance structure is designed to provide Luna holders with a way to determine which proposals and organizations receive funding based on the highest net impact they will have on the Terra economy.

So How Decentralized is Terra?

As of writing this article, there are currently 61 validators located around the world on the Terra Columbus-3 mainnet (we at Consensus Networks are on this list and have been active validators since Terra first launched this past April!). While only about ⅓ the size of the validator set on its parent network, Cosmos, this is still a fairly impressive degree of physical decentralization, considering Terra underwent a fully decentralized launch and has been concentrating on integrating with e-commerce platforms exclusively in Southeast Asia. However, as of the writing of this article, the top 9 validators control 62.8% of the voting power on the network. So, similar to what was observed last week with Cosmos, a very small handful of network participants control the majority of economic and governance resources.

However, what is less clear is whether this centralization of resources has as significant of consequences on a growth-focused stablecoin network like Terra. For example, Seoul National University's blockchain research group, Decipher, conducted an in-depth study on Terra that concluded it exhibits much greater price stability than other popular stablecoins like USDC or USDT. Terra has also onboarded 14 online e-commerce platforms and over 1,000,000 users onto its payments application, CHAI, resulting in over $130 million being processed by the network to date. They have also begun expanding outside of South Korea into areas like Mongolia and Singapore. Given that Terra's mission was to be a price-stable cryptocurrency with a clear path to market, it objectively appears that they have been successful in their goal thus far (especially when considering that the mainnet has been live for less than a year). With validators receiving rewards in two forms (transaction fees and Luna burn), Terra has created a rewards structure that is predictable for validators under all economic conditions, giving them little to gain from colluding in an attempt to undermine the network.

Yet, the recent additions of on-chain governance in Columbus-3, Luna being listed on more exchanges, and Luna receiving tax-exempt status on transactions introduce new layers of complexity to the Terra ecosystem that could pose a threat to the decentralization of the network. Now, anyone can vote on proposals that affect Terra's future trajectory at both a governance and a functional level. When considering that proposals on the network require a supermajority of votes to pass, the threat of collusion between a handful of parties controlling most of the resources now poses a much greater threat. For example, if the top 9 validators were to collude and try to pass a proposal that benefited them at the expense of the network, they would only need to acquire roughly 4% more of the voting power to reach the supermajority needed to approve it and change the protocol at a functional level. Additionally, given Terra's adoption-driven growth model, there is now a whole new range of stakeholders that must be factored into the ecosystem, like e-commerce platforms and users of Terra-based applications. While it is still unclear how this will evolve over time, effectively anticipating and designing for these new dynamics is one of the primary focus areas for the team moving forward, as can be seen here.

Given the major shifts in how the protocol operates and the massive influx of new stakeholders, it is far too early to speculate on how Terra's approach to decentralization will evolve in the future. Regardless, the fact remains that Terra's adoption-driven approach to growth has made it one of the few cryptocurrencies seeing demonstrable real-world use to date. Having recently hired Uber's former Head of Strategy, Rahul Abrol, to spearhead their international growth efforts, Terra and CHAI have a very realistic chance of achieving their goal of becoming the leading payments platform in Asia in the years to come. Thank you for reading, and I hope you enjoyed the article! Tune in next week as we explore the other massive project looking to create the internet of blockchains, Polkadot, and its canary network, Kusama.

Blockchain Learning, Proof of Stake

Cosmos’s Approach to Decentralization

Note: This is the fourth installment of a series detailing the approaches that different blockchain networks have taken to decentralize their network. Part 1 introduced the concept of decentralization at a high level, the role it plays in crypto networks and some of the different factors inherent to a protocol that influence how decentralization will propagate at scale and influence participation. Parts 2 & 3 then provided in-depth looks at Bitcoin and the Factom Protocol in addition to their approaches to decentralization. If you missed any of these articles and would like a refresher before we dive into the decentralization of Cosmos, you may do so here, here, & here respectively.

Hello everyone and thanks for joining! Over the last few weeks, we have talked quite a bit about decentralization and the different mechanisms that blockchain networks use to influence participation and dispersion of their network participants. We took a nice trip down memory lane to examine Bitcoin's approach over the last decade, followed by fellow veteran protocol Factom. In this week's article, we'll dive into Cosmos, the first Proof-of-Stake (PoS) protocol in our series. While conceptualized in a white paper as early as 2016, Cosmos only launched on March 13, 2019. The full history leading up to this event is too long to recount in this article, but I highly recommend checking it out here, as it involves some cool blockchain history. For example, one of the organizations behind the Cosmos Network, the Interchain Foundation (ICF), raised nearly $17 million in 30 minutes for the project via an ICO, which is regarded as one of the most successful fundraising events for a blockchain-related project ever. Cosmos may be slightly less than a year old, but it is certainly one of the most ambitious blockchain projects to emerge thus far. Cosmos is creating what many have termed the 'Internet of Blockchains' or 'Blockchain 3.0': an ecosystem of interconnected blockchain protocols that are interoperable and can seamlessly communicate with one another.

What is Cosmos and How Does it Work?

Beginning with Ethereum in 2014, developers started experimenting with ways to implement Bitcoin's underlying blockchain technology into applications beyond digital money. Specifically, many were interested in using blockchain for stateful applications that could handle more robust computer logic rather than a stateless ledger. This led to innovations like smart contracts and gave rise to the notion of decentralized applications (dApps). As developers started exploring new uses for blockchain technology, an explosion of new, highly specific crypto networks ensued. Consequently, people began to realize that, if the decentralized web was going to work, one of two things would have to happen. One school of thought was that everything would resolve to one or a handful of 'fat protocols' capable of handling all the compute needs of the different applications built on top of them with universally agreeable tokenomics. The opposing view was that there would be a myriad of different blockchains, some application-specific and others comprehensive, that would be interoperable with one another so people could transact across them as needed.

Cosmos was born out of the latter ideology and essentially applies a microservices architecture to create a base-layer network that connects an ecosystem of application-specific blockchains. To achieve this goal, the teams working on Cosmos decouple the different components of blockchain networks (consensus, networking, and the application) and provide them piecewise in a 'plug-n-play' fashion to the protocols building their blockchains on top of Cosmos. Hence Cosmos, while a blockchain protocol in itself, acts as a platform wherein other blockchains can be built using a suite of customizable, shared technologies, with Cosmos handling state and transactions across the different sub-networks.

The first component that makes this possible is the Tendermint BFT consensus engine. Tendermint BFT is an open-source technology that combines the networking and consensus layers of blockchains into a generic byzantine fault-tolerant 'consensus engine' that can easily be plugged into any application to create a proof-of-stake blockchain. Being byzantine fault-tolerant means that up to ⅓ of the nodes in the network can arbitrarily fail or be overcome by a malicious actor, yet the network will still operate as intended. If you would like more information regarding byzantine fault tolerance or the Byzantine Generals Problem, the original paper on the topic is available here. Proof-of-stake is an alternative consensus mechanism to proof-of-work that replaces mining with the notion of skin in the game via stake, or the amount of a network's cryptocurrency a user owns and invests into the protocol. The larger a participant's stake, the more likely that participant is to be elected to produce blocks and be rewarded. This is analogous to the relationship between a Bitcoin miner's investment in computing power and his or her likelihood of mining a block, but requires significantly fewer computational resources and allows for faster transaction times. Tendermint founder Jae Kwon was actually the first person to prove and demonstrate that byzantine fault tolerance worked for proof-of-stake networks, so for more information, I invite you to check out the paper he released on the matter here.
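To illustrate the "more stake, more blocks" idea, here is a simplified, hypothetical sketch of stake-weighted proposer selection. This is not Tendermint's actual proposer-rotation algorithm, just the general principle that selection probability scales with stake:

```python
# Simplified illustration: each validator's chance of being picked to propose
# a block is proportional to its stake. Names and stake amounts are made up.
import random

def pick_proposer(stakes):
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

stakes = {"validator-a": 500_000, "validator-b": 250_000, "validator-c": 50_000}
picks = [pick_proposer(stakes) for _ in range(10_000)]
# Roughly 62.5% / 31.25% / 6.25%, mirroring each validator's share of total stake.
print({v: picks.count(v) / len(picks) for v in stakes})
```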

Tendermint BFT allows developers to save time and effort setting up the networking and consensus components of their proof-of-stake networks so that they can instead focus almost singularly on the application layer. Moreover, Tendermint BFT is language agnostic and has clients in Golang, Javascript, Rust, & other popular programming languages, allowing developers to build their blockchain on top of Cosmos with ease. The engine then communicates with the application layer through a socket protocol known as the Application Blockchain Interface (ABCI) as may be viewed below.
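To give a feel for what the application side of that interface looks like, here is a toy, in-memory key-value application that mimics the shape of the ABCI flow (check a transaction, apply it, commit the block). It does not use a real ABCI server library; the class and method names are illustrative only:

```python
# Toy sketch of the ABCI idea: the consensus engine hands the application
# transactions through a small, well-defined interface. This mimics the shape
# of CheckTx / DeliverTx / Commit with an in-memory key-value store.
import hashlib

class ToyKVStoreApp:
    def __init__(self):
        self.pending = {}   # transactions applied in the current block
        self.state = {}     # committed application state

    def check_tx(self, tx):
        # Mempool-level validation: expect "key=value" transactions.
        return tx.count(b"=") == 1

    def deliver_tx(self, tx):
        key, value = tx.split(b"=", 1)
        self.pending[key] = value

    def commit(self):
        self.state.update(self.pending)
        self.pending.clear()
        # Return a hash of the state, loosely analogous to the app hash in a block header.
        return hashlib.sha256(repr(sorted(self.state.items())).encode()).hexdigest()

app = ToyKVStoreApp()
tx = b"hello=world"
if app.check_tx(tx):
    app.deliver_tx(tx)
print(app.commit())
```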

Cosmos also provides blockchains building on top of it with a generalized application development framework, the Cosmos SDK, that makes building secure applications on top of Tendermint BFT using ABCI much easier and quicker. The SDK comes with a set of pre-built modules with built-in security and separation of concerns, allowing developers to build applications without having to code everything from scratch. Hence, they simply use the components they need and can accelerate the time it takes to launch their blockchain.

Once a developer has launched their application-specific blockchain, he or she can then communicate and transact with other blockchains in the Cosmos ecosystem via the Inter-Blockchain Communication Protocol (IBC). Each chain in the Cosmos ecosystem is heterogeneous, meaning that each has fast-finality consensus provided by Tendermint BFT and its own set of validators, differing only in how these components are implemented. Validators are the node operators in proof-of-stake networks, like Cosmos, who are responsible for participating in consensus, producing blocks, and governing the network. Anyone can set up a validator and compete to be in the active set, or subset of validators selected to participate in consensus. Before launch, a proof-of-stake network will determine how large it wants its active set to be, and validators then fill that set based on the magnitude of their stake after launch. Cosmos's consensus mechanism then selects participants probabilistically, based on the amount of stake they have relative to the other validators, to propose blocks and be rewarded. Validators can also suggest and vote on governance proposals for the network. Users on the network who lack the funds or stake required to be in the active set may instead stake their funds with a validator in the active set and receive a share of the rewards that validator earns, proportional to the amount they have staked, minus a fee taken by the validator.
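As a rough sketch of that reward split, a delegator's share of a validator's reward might be computed along these lines (all figures are hypothetical):

```python
# Minimal sketch of the delegation reward split described above: a delegator
# receives a share of the validator's rewards proportional to their stake,
# minus the validator's commission.
def delegator_reward(block_reward, delegator_stake, validator_total_stake, commission):
    gross = block_reward * (delegator_stake / validator_total_stake)
    return gross * (1 - commission)

# 10 ATOM reward, delegator holds 2,000 of the validator's 100,000 staked ATOMs,
# validator charges a 10% commission.
print(delegator_reward(10, 2_000, 100_000, 0.10))  # 0.18 ATOM
```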

Each application-specific blockchain represents a zone, which then connects to a hub (in this case Cosmos) via IBC connections. The Cosmos Hub is specifically designed to serve as the connection point for all of the applications, or zones, built within its ecosystem and carry out seamless transfers between them via the IBC, as seen below:

Cosmos has designed its ecosystem such that there can eventually be multiple hubs connected via IBC so that the network can continue to operate at thousands of transactions per second at scale. Additionally, they plan to incorporate Peg Zones, or blockchains whose sole responsibility is to track the state of another chain. These Peg Zones will serve as bridges to non-Tendermint networks like Ethereum or Bitcoin and allow them to communicate with Cosmos via IBC. Hence, the end vision for Cosmos is an ecosystem of interconnected blockchains that looks as follows:

Governance on Cosmos

Cosmos was designed with a tiered governance structure that is open to the entire community but leaves ultimate voting responsibility in the hands of those supporting the network (i.e., the validators). There is a constitution that states how Cosmos is governed and how proposals for network updates are to be made. Anyone holding the network's native currency (the Atom) is eligible to make a governance proposal and submit it for voting. The proposals can range in focus and magnitude, covering everything from changing how governance occurs on the network to overhauling aspects of the network's codebase that modify functionality.

The one caveat is that at least 512 Atoms must be deposited toward the proposal by either the member who submitted it or the community broadly. At the end of two weeks, the proposal is either dismissed, if it does not receive sufficient backing, or enters a new two-week voting period in which the community discusses its implications and submits votes of either "yes", "no", "no with veto", or "abstain". Only staked Atoms count towards governance, and the relative weight of a participant's vote is based on their stake in the network. Hence, validators with larger stakes hold more voting power. Atom holders who choose not to run a validator can stake their Atoms to any validator they feel represents their interests and inherit that validator's vote. A proposal will only be accepted if over 40% of staked Atoms participate in the vote, over 50% of the vote is in favor of the proposal, and less than a third of voters elect to veto it. If successful, the proposal is integrated into the software that runs the network by the core developer team and the validators must coordinate an update to reflect the changes.
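Putting those tally rules together, a minimal sketch of the pass/fail check might look like the following; the thresholds come from the description above, while the function itself is only illustrative:

```python
# Sketch of the tally rules described above: at least 40% of staked Atoms must
# vote, more than half of the non-abstain vote must be "yes", and fewer than a
# third of voters may veto.
def proposal_passes(yes, no, no_with_veto, abstain, total_staked):
    voted = yes + no + no_with_veto + abstain
    if voted / total_staked <= 0.40:       # quorum not reached
        return False
    if no_with_veto / voted >= 1 / 3:      # vetoed
        return False
    return yes / (yes + no + no_with_veto) > 0.50

# 100 of 200 staked Atoms vote; 55 yes, 30 no, 5 veto, 10 abstain -> passes.
print(proposal_passes(yes=55, no=30, no_with_veto=5, abstain=10, total_staked=200))  # True
```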

More on the finer details of how the governance process works can be found in the Cosmos white paper here, or in this article written by fellow Cosmos Validator Chorus One. It is also important to note that each zone built on Cosmos has its own separate constitution and governance process. Therefore, governance decisions made on the Cosmos Hub do not necessarily dictate how other zones operate. However, if any changes are made to either Tendermint BFT or the Cosmos SDK as a result of the governance update, zones will likely need to update their own networks if they are using either of these technologies.

So How Decentralized is Cosmos?

Cosmos Hub 1 (version 1 of the mainnet) launched on March 13, 2019, and was one of the first proof-of-stake networks to successfully carry out a fully decentralized launch. This is in contrast to other successful proof-of-stake network launches, like Tezos, which had a group of Foundation Validators who controlled the initial mainnet. The initial active set was capped at 100 possible validators, and 75 of these slots were filled at genesis. This meant that there were 75 different groups distributed around the world working together to coordinate the launch of the network and start actively validating transactions. However, while Cosmos may have had an impressive initial degree of decentralization based on the number of unique validators, the true magnitude of that decentralization was far more nuanced.

Within the first week after the launch of the Cosmos mainnet, the top 5 validators controlled nearly 50% of the staked Atoms on the network. Moreover, many of these participants had been significant buyers in the Cosmos coin offering. This illustrated that, while Cosmos may have been decentralized in terms of the number of validators, the actual distribution of power and resources within the network told a far different, more centralized story. For the next three months, 15 or fewer validators controlled ⅔ of the staked Atoms on the entire network at any given time (per https://twitter.com/cosmosdecentral). In October 2019, Gavin Birch, community analyst at Figment Networks, released a report detailing the negative repercussions that coupling governance with the validator role was having on Cosmos, which may be found here. In summary, the report revealed that validators were exploiting the Cosmos incentive structure to gain undue influence over network governance at the expense of other validators. Some validators were running with zero fees, meaning that 100% of the rewards they received would be distributed to those staking with them. Naturally, these zero-fee validators received a disproportionate amount of the stake as Atom holders sought to maximize their returns. This forced some validators to shut down, due to their inability to compete and earn rewards sufficient to offset the cost of their infrastructure.
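The economics behind that behavior are straightforward: with similar gross rewards across validators, a delegator's net yield depends almost entirely on the validator's commission. The sketch below assumes a hypothetical 9% gross staking yield to show why zero-fee validators attracted stake:

```python
# Rough illustration of why zero-fee validators drew a disproportionate share
# of delegations: identical gross yield, so net yield falls as commission rises.
gross_annual_yield = 0.09  # assumed staking yield before commission (hypothetical)

for commission in (0.00, 0.05, 0.10):
    net = gross_annual_yield * (1 - commission)
    print(f"commission {commission:.0%} -> delegator net yield {net:.2%}")
```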

Many community members grew concerned over what this meant for the future of Cosmos, as this behavior posed a major security vulnerability to the network. If validators continued to exploit the network in this fashion, it would drive away those who could no longer afford to run a validator and decrease the level of physical decentralization across the network, making it easier for malicious actors to overtake it. In response, on December 11, Cosmos Hub 3 was launched based on a governance proposal that aimed to reconcile some of the alignment issues between network governance and incentives. The first major change was that the active set of validators was expanded from 100 to 125 validators, allowing for more participation in the network. The second major change was a redefinition of the entire governance structure. Prior to the upgrade, voting was only a signaling mechanism. The result of a vote had no immediate consequences other than telling the core development team what changes needed to be made to the network based on the proposal. The core developers would then release a software update, all of the validators would coordinate when to perform the upgrade, and the network would be restarted (a process known as a hard fork). With the release of Cosmos Hub 3, the network now features on-chain governance. This means that the result of a vote will automatically change the appropriate parameters in the codebase and alter how the network behaves without having to perform a hard fork.

Instituting on-chain governance drastically shifts the incentives for network participants to vote and provides a mechanism by which delegators (those staking to other validators) can more actively participate in governance. While they still have the option to inherit the vote of the validator they are staked to, delegators can also elect to override that vote with their own if they disagree with how the network should proceed. This helps galvanize delegators to play a more active role in governance, which helps balance the economic and governance incentives of the network while maintaining a relatively stable degree of physical decentralization. Additionally, there has been a proposal made to fund a governance working group to work alongside protocol development and help address emerging governance issues. There were several other major updates in Cosmos Hub 3, like the introduction of a rewards pool from which network participants can vote on proposals to fund projects that enhance the ecosystem. The full proposal may be found here.

It is still too early to say if expanding the active set of validators and introducing on-chain governance was the right answer to solving the centralization of power and economics on Cosmos. Figment Networks' December Cosmos update revealed that the voting power of the bottom 90% of validators on the network had increased 3% from November. However, it also found that the top 10 validators on the network controlled 46% of the voting and consensus power. This analysis was only done across the top 100 validators in December to maintain consistency with November, so this month's analysis will better reflect the impact that increasing the validator set to 125 had on the decentralization of the network. And who knows, with Cosmos slated to conduct Game of Zones in the coming months to refine their inter-blockchain communication module, a slew of major updates are likely to happen this year that will fundamentally change how the network operates. Thanks for reading and join me next week as we dissect the decentralization of one of the fastest growing networks in South Korea and a zone within the Cosmos ecosystem, Terra!

Blockchain Learning

Factom’s Approach to Decentralization

By Connor Smith

Note: This is the third installment of a multi-part series detailing the approaches that different blockchain networks are taking towards decentralization. Part 1 introduced the concept of decentralization at a high level, the role it plays in crypto networks, and some of the different factors inherent to a protocol that influence how decentralization will propagate at scale and influence participation. Part 2 then provided an in-depth look at Bitcoin and its approach to decentralization. If you missed either of these articles and would like a refresher before we dive into the decentralization of Factom, you can do so here and here respectively.

Hello again and I hope everyone's 2020 is off to a good start! In this week's article, I will be diving into how Factom has approached decentralization. Conceptualized in 2014 and released the following year, Factom is one of the more veteran protocols still in use today. After observing the speed, cost, and bloat limitations developers experienced when building applications on top of Bitcoin, Factom was released as a developer-friendly way to secure information into the Bitcoin and Ethereum blockchains without having to transact with those networks directly. Factom is designed to help ensure data integrity and has been used to secure data to the blockchain for the likes of the Department of Energy, the Department of Homeland Security, and the Bill and Melinda Gates Foundation, to name a few. Most recently, Factom has seen use as the base-layer network for PegNet, a decentralized CPU-minable stablecoin network that allows users to convert between a network of pegged assets (crypto and real-world) for less than one-tenth of a cent. So without further ado, let's get into the nuts and bolts of how Factom works!

Factom Overview:

Factom is essentially a collection of blockchains that immutably record any form of data in a very structured, accessible way. A user simply creates a chain for a topic and then writes data to that chain, where it is recorded as transactions in its blocks. This data is then secured to the Factom blockchain to leverage the power of the overall network. Factom is composed of several layers of data structures that are hierarchical in nature. The highest-layer data structure is called the Directory Layer, which organizes the Merkle roots of the entry blocks. Basically, this layer is just a hash generated from all of the entry blocks plus their corresponding chain IDs. The next layer down is the Entry Block Layer, which holds reference pointers to all of the entries with a particular chain ID that arrived within a given time. Underneath the Entry Block Layer come the entries themselves, which are the raw application data written to Factom. Lastly come chains, which are groupings of entries for a particular application or topic within an application. An image of how all of these different layers interact may be viewed below. In short, application data is organized into chains, which are added to entry blocks and hashed into the Directory Layer to be secured by Bitcoin and Ethereum.

Image from the Factom White Paper
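For a rough feel of that layering, the following sketch groups hypothetical entries into chains, hashes each chain's entries into an entry-block digest, and rolls those up into a single directory-level hash. The hashing scheme is illustrative and is not Factom's exact serialization format:

```python
# Simplified sketch of Factom's layering: entries -> chains -> entry blocks
# -> directory-level hash (which would ultimately be anchored into Bitcoin
# and Ethereum). Chain IDs and entries are made up.
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

entries = {
    "chain-invoices": [b"invoice #1001", b"invoice #1002"],
    "chain-sensor-data": [b"temp=21.4C"],
}

# Entry Block layer: one digest of entry hashes per chain for this window.
entry_blocks = {
    chain_id: h(b"".join(h(e).encode() for e in chain_entries))
    for chain_id, chain_entries in entries.items()
}

# Directory layer: a single hash over all (chain ID, entry block hash) pairs.
directory_hash = h("".join(f"{cid}:{ebh}" for cid, ebh in sorted(entry_blocks.items())).encode())
print(directory_hash)
```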

Factom is designed to be extremely developer-friendly. Instead of requiring developers to use a protocol-native language, as Ethereum does with Solidity, Factom offers a collection of client libraries accessible through APIs for the following commonly used programming languages: Javascript, Python, C#/.Net, Go, Java, and Rust. As mentioned above, Factom is anchored to the Bitcoin and Ethereum networks, so every time a block is added to the Factom blockchain, the data is also immutably recorded on those networks.

Image from https://blog.factomprotocol.org/factom-in-regulated-industries/

Factom utilizes a two-token model in which there is the token associated with the protocol (the Factoid) and another token used to submit entries to the network, known as the entry credit. The Factoid, like other cryptocurrencies, is price-sensitive and varies with the market over time. The entry credit, by contrast, maintains a fixed price of a tenth of a cent and may only be used to submit entries to Factom. This allows developers and enterprises to interact with the Factom blockchain at a stable, predictable price while still leveraging the hash power of more price-volatile networks like Bitcoin and Ethereum. To transact on the network, developers use Factoids to purchase entry credits that are in turn used to submit application data to the blockchain. The application then records an entry to the Factom blockchain, and Factom servers create the appropriate entry and directory blocks. Factom then secures an anchor, or hash of the directory block, to Bitcoin and Ethereum. An overview of how this process looks in practice may be viewed below.

Image from the Factom White Paper
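As a small worked example of the two-token model, the number of entry credits a single Factoid buys floats with the Factoid's market price while the entry credit stays fixed at a tenth of a cent. The FCT price used below is purely hypothetical:

```python
# Entry credits are fixed at $0.001 each, so the conversion rate from Factoids
# to entry credits moves with the FCT market price.
ENTRY_CREDIT_PRICE_USD = 0.001

def entry_credits_per_factoid(fct_price_usd):
    return fct_price_usd / ENTRY_CREDIT_PRICE_USD

print(entry_credits_per_factoid(2.50))  # 2500 entry credits per FCT at a hypothetical $2.50
```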

Factom Network Architecture, Governance, & Incentives:

Nodes on the Factom network are split into two classes of servers in an attempt to decouple the two roles that Bitcoin miners essentially play: recording entries in a final order and auditing the validity of entries. Hence, there are federated Factom servers and audit servers. Federated servers are those responsible for accepting entries to a chain on Factom, assembling them into blocks, and then fixing the order of all of the entries across the network. Roughly every 10 minutes, a block for all entries on the network is recorded to the Factom blockchain by the federated servers, and a hash of the data is then inserted into the anchors on Bitcoin and Ethereum. Audit servers, by contrast, simply audit all entries made on the network. The verification of all entries is done client-side, enabling audit servers to perform their job in either a trusted or trustless manner depending on the client's needs and level of trust a priori.

Managing all of the chains on the Factom network is no simple task. Since any application can have as many independent chains as needed to secure different data sources, there are thousands of chains and entries that must be validated, written to the Factom blockchain, and then propagated to Bitcoin and Ethereum. A network of independent, globally distributed Authority Nodes bears this responsibility on the Factom Protocol. These Authority Servers are the set of federated and audit servers that essentially operate Factom. The federated servers in the authority set are responsible for ordering entries made to the network, while the audit servers in this set duplicate the work of the federated servers and will take over in the event a federated server fails. To ensure no one party in the authority set has too much power, each server is only responsible for a small part of the system, servers double-check the work of other servers, and servers cycle responsibilities every minute or so. Only a small group of trusted, community-elected parties are permitted to run Authority Servers, so the network is able to record entries quickly using a variant of a Proof-of-Authority consensus mechanism.

Those running Authority Servers are known as Authority Node Operators (ANOs), and they are the network participants who benefit economically in the Factom ecosystem. They are also the parties responsible for governing Factom, and thus hold considerable power within the Factom ecosystem. Unlike Bitcoin, however, where anyone can mine and increase their relative economic power by investing in more infrastructure, ANOs must be elected by the community of existing ANOs. This process typically happens once per year, and there will only ever be 65 ANOs. All governance decisions made by the ANOs are done off-chain. A thorough explanation of Factom's governance process may be found here, but it is generally as follows: the community will draft a document for any change it is considering making to the network. A 'Major Timed Discussion' is then opened on Factomize, the community's forum channel, and members have 8 days to voice their opinions or concerns over the proposal. At the end of the discussion, a vote is made by the ANOs on whether to implement the change or not. If the vote passes, it is sent to the democratically elected legal committee for review, and then added to the governance documents.

In exchange for running the infrastructure that supports the network and playing an active role in governance, ANOs are compensated in Factoids. They are rewarded a base of 2246 FCT per month, reduced by their efficiency. Efficiency refers to the percentage of their FCT that ANOs forego to the community grant pool and is related to the amount of work an ANO contributes to Factom outside of infrastructure services. A full breakdown of how ANOs are compensated, with examples, may be found here, but it tends to take the following form: an ANO only providing infrastructure and governance services is expected to operate at around 60% efficiency, meaning they contribute ~60% of their total possible rewards to the grant pool to fund core protocol work, marketing, or other projects that use or promote Factom. Conversely, if an ANO intends on providing these types of services to the network in addition to other responsibilities, they can set a lower efficiency and be compensated more for their efforts.
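A minimal sketch of that payout rule, assuming the reward is simply the 2246 FCT base reduced by the efficiency percentage (actual payout mechanics may differ in detail):

```python
# Sketch of ANO compensation: the monthly base reward is 2246 FCT, and the
# ANO's "efficiency" is the fraction it forgoes to the community grant pool.
MONTHLY_BASE_FCT = 2246

def ano_monthly_payout(efficiency):
    kept = MONTHLY_BASE_FCT * (1 - efficiency)
    to_grant_pool = MONTHLY_BASE_FCT * efficiency
    return kept, to_grant_pool

kept, pool = ano_monthly_payout(0.60)  # infrastructure-only ANO at 60% efficiency
print(f"ANO keeps ~{kept:.0f} FCT, ~{pool:.0f} FCT goes to the grant pool")
```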

So How Decentralized is Factom?

There are currently 28 independent Authority Node Operators supporting the Factom Protocol and participating in its governance. While primarily located within the U.S. or Europe, the different firms serving as ANOs are distributed around the globe and comprise a mix of Factom-specific companies and ledger-agnostic entities. The parties acting as ANOs represent a host of different interests and skill sets, ranging from the protocol-associated corporation that primarily maintains the core codebase, Factom Inc., to consulting and investment firms like VBIF, and everything in between. (We at Consensus Networks are ANOs for Factom as well!) From this network of ANOs, there are 5 Guides who were elected by the broader Factom community to help implement a governance framework and ensure it runs efficiently. Guides do not inherently possess any greater power or influence, but rather help ensure that ANOs are up to date on what governance matters need to be attended to and that there are democratic, efficient procedures in place for ANOs to use to do so.

While 28 authority nodes is not a large number when compared to the thousands of nodes on networks like Bitcoin and Ethereum, it has proven sufficient for the needs of a data-layer protocol like Factom in terms of network uptime and speed. It is also important to note that even though only 28 nodes are maintaining the network, the utility of the Factom Protocol is still accessible to anyone. For example, a group of ANOs provides and maintains an Open Node that any developer around the world can utilize to interact with the Factom Protocol via an API endpoint. The most recent network update for Factom revealed that in excess of 150 million requests had been made to this node during the one-month period from Nov. 13 to Dec. 13. Additionally, anyone, ANO or not, is eligible to apply for grants to fund their Factom-based projects. Seeing as ANO elections tend to occur roughly every 6-12 months, it is reasonable to suspect that there will likely be another election round sometime during 2020. The exact number of ANOs onboarded varies from round to round, but would likely fall somewhere between 4-6 new operators based on the number of new ANOs brought on in each of the last two rounds.

Despite the small cohort of ANOs governing the protocol, they represent quite a diverse range of interests and have varying opinions about what constitutes proper ANO activities and the future direction of the protocol. If you go through nearly any of the major timed discussions on Factomize, you will see healthy discourse over various proposals with multiple viewpoints represented across the different ANOs. To better incentivize ANOs to act in the long-term interests of the network, increase their activity within the community, and prevent them from 'free-riding' if elected into the group of 65 ANOs, procedures for ANO promotion and demotion were recently instituted into the governance documents of the network. Hence, the community can better voice their approval or disapproval of an ANO's performance and now has mechanisms in place to punish poor ANO performance. If the community collectively decides that an ANO has not been carrying out their responsibilities effectively for two quarters, they can elect to demote them from being an ANO. Ultimately this provides another layer of checks and balances to the power structure of network participants.

Only time will tell whether Factom's open approach to decentralization, in which no single community member is inherently more powerful than another and all decisions relating to the protocol are made via a democratic process and voted on by all ANOs, will prove successful. With PegNet recently being listed on exchanges and Factom seeing use in the US HOA industry by Avanta Risk Management and for expediting banking regulatory compliance in the UK by Knabu, 2020 is looking to be a bright year for this veteran protocol. I hope you have enjoyed this article and found it informative! If you have any questions or comments, please feel free to leave them below. Next week I will be continuing this series and diving into Cosmos, an ambitious project attempting to create the 'Internet of Blockchains', so stay tuned and thanks for reading!

Blockchain Learning

Bitcoin’s Approach to Decentralization

By Connor Smith

Note: This is the second part of a multi-part series in which I will examine the approaches that different blockchain networks are taking towards decentralization. Part 1 introduced the concept of decentralization at a high level, the role it plays in crypto networks, and some of the different factors inherent to a protocol that influence how decentralization will propagate at scale and influence participation. If you missed that article or would like a refresher before we dive into the decentralization of the Bitcoin network, you can do so here.

With 2019 having just come to a close along with the decade, it seems like an apt time to be reflecting on Bitcoin’s approach to decentralization and how it has played out. Having entered the decade as little more than a whitepaper and an open source protocol with a handful of users and contributors, Bitcoin enters the 2020s as the best performing asset of the 2010s. Bitcoin was the first true cryptocurrency and blockchain network, and hence the longest lived. There was no blueprint for Bitcoin to follow, or generally accepted framework for launching a blockchain protocol back in 2009. The Bitcoin network we see today is a true first stab at a decentralized network for transacting value and serves as the longest running experiment for us to learn from. All blockchain protocols that have come since have been influenced by Bitcoin, mimicking certain aspects of it and trying to improve upon others. Hence, Bitcoin serves as a great foundational network to begin our examination of decentralization, as it will help you to better understand why the other networks we will explore have made the choices they did.

An Overview of How the Bitcoin Network Works

One of the most important things to understand about Bitcoin, and one of the most commonly overlooked, is that none of the base technologies used in its creation were inherently novel. Elliptic curve cryptography, distributed systems, and the idea of proof-of-work had all been established well before Bitcoin. Even the concept of digital cash had been tried as early as the 1990s with David Chaum's DigiCash. What was novel about Bitcoin's design was how it combined all of these elements to solve the Double Spend Problem and built an incentive structure around it that allowed digitally native value to be securely transacted across a peer-to-peer network in a way that was agreed upon through a proof-of-work consensus algorithm. For those of you who might be new to crypto, I will provide a brief overview. However, I highly recommend reading the Bitcoin whitepaper if you want a more in-depth explanation.

In short, Bitcoin's proof-of-work mechanism works as follows. Each block created on the Bitcoin blockchain has a cryptographic hash associated with it that is generated from its index, timestamp, the block data (Bitcoin transactions), the hash of the previous block, and what is known as a nonce. The nonce is a value chosen so that the resulting block hash has a required number of leading zero bits. There is no way for miners to know the correct nonce a priori, so they compete to find it and thereby solve the block hash. Moreover, as the compute, or hashing, power on the network increases, so does the difficulty of guessing the correct value. The hash difficulty of the network is periodically adjusted so that finding a valid block takes 10 minutes on average; once the correct nonce is computed, it is broadcast to the network, the miner who calculated it is rewarded, and consensus is reached.

Image from https://en.bitcoinwiki.org/wiki/Bitcoin_transaction
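For readers who prefer code to prose, here is a minimal, illustrative proof-of-work loop in the spirit of the description above. It uses a single SHA-256 over an ad-hoc header and counts leading zero hex characters; real Bitcoin mining uses double SHA-256 over a strictly defined block header and a vastly higher difficulty:

```python
# Toy proof-of-work: increment the nonce until the block hash has the required
# number of leading zero hex characters. Header format and difficulty are
# simplified for illustration only.
import hashlib
from itertools import count

def mine(block_data, prev_hash, difficulty=4):
    for nonce in count():
        header = block_data + prev_hash.encode() + str(nonce).encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return nonce, block_hash

nonce, block_hash = mine(b"alice pays bob 1 BTC", "0" * 64)
print(nonce, block_hash)
```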

As you can see, miners play a very critical role in the Bitcoin ecosystem. They’re the ones responsible for adding blocks to the Bitcoin blockchain and are compensated for contributing their compute power and helping secure the network. Yet, miners are just one actor within the Bitcoin protocol. Network participants also have the option to run either a full node or a light node on the network. Full nodes are responsible for hosting a copy of the entire Bitcoin ledger and verifying the authenticity of its transaction history all the way back to the genesis block. They simply maintain and distribute the most trusted version of the blockchain to other nodes on the network, thus not requiring the same computational resources as miners. Light nodes play a similar role to full nodes, but instead of keeping a full version of the ledger, they download the block headers to validate the authenticity of transactions. They are often peered to full nodes to further decentralize the network or can be used to help restore a full node if it is corrupted.

Aligning Incentives: Governance and Reward Structures

Let’s examine how Bitcoin attempted to balance the economics and power of stakeholders while decentralizing the network. Using the tiered network architecture described above, Bitcoin sought to create a bifurcation between how network participants are rewarded and how governance occurs. When Bitcoin was first released and blocks were CPU-minable, anyone could run a miner and be rewarded for contributing their compute power to secure the network while also running a full node. Given the grassroots beginnings of Bitcoin, it made sense that the small group of early adopters should be rewarded economically and have the responsibility of participating in governance. However, as the difficulty of solving blocks increased, miners started utilizing computationally superior ASICs. This made the barrier to profitable mining much steeper, as just one top-of-the-line miner cost in excess of $1000, leaving only a small subset of network participants willing to make the necessary capital investment. This high barrier to entry appears likely to persist, continuing to reduce the number of individual, decentralized miners participating on the network.

Without all network participants being able to truly compete in the mining process, there is some inherent risk of a 51% attack. This is where an individual or group of participants takes control of the majority of the network's hashing, or computing, power to prevent transactions and determine what blocks are added to the blockchain. While in control of the economics of the network, miners do not inherently hold any special authority, nor are they required to participate in governance responsibilities. Seeing as Bitcoin is an open-source project, anyone running a node on the network can theoretically propose changes to the codebase that alter how transactions are validated, how consensus is reached, etc. A more thorough examination of how Bitcoin governance works may be found here, but the process is generally as follows: A user conducts research to solve some problem with Bitcoin. Once they have a solution, they notify all other protocol developers, typically via a Bitcoin Improvement Proposal (BIP). After the proposal has been made, other interested protocol developers begin implementing and testing it to give a formal peer review. If the change is well received and approved, it will be implemented into the node software, and then node operators, exchanges, and other community members must be convinced to update the software. As long as the majority of the community finds it reasonable, the network will be updated and the new rules or functionality are put into place.

So How Has Decentralization Played Out for Bitcoin?

As of writing this article, there are just shy of 9000 full nodes supporting the Bitcoin network. 25.30% of these nodes reside in the U.S., 20.76% in Germany, and the remaining 53.94% are dispersed across the rest of the world, largely in Europe and Asia, as can be seen below.

Image from https://bitnodes.earn.com/

It is worth noting that some, like Bitcoin Core developer Luke Dashjr, speculate that the true number of nodes on the network is closer to 100,000. This estimate allegedly accounts for all nodes on the network and not just nodes in “listening mode”, which node monitoring services use when calculating the number of nodes on the network. Regardless, this expansive, global network of nodes is what makes Bitcoin generally regarded as the most secure distributed network. If an individual wanted to undermine the network or rewrite the transaction history, he or she would have to have enough computing power to simultaneously rewrite the ledger on a majority of the nodes, which is nearly impossible (setting aside speculative future threats such as quantum computing).

Where Bitcoin is more centralized, however, is in the concentration of the mining power on the network. As of the writing of this article, the top 4 mining pools (AntPool, BTC.com, Poolin, & F2Pool) control 59.8% of the hashing power on the network, as can be viewed below. Mining pools exist to pool hashing power into groups, increasing the likelihood of receiving a Bitcoin block reward. With the amount of hashpower on the Bitcoin network today, it is practically impossible for an individual to mine Bitcoin on their own.

Image from https://www.blockchain.com/pools
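A rough way to see why pooling is attractive: a miner's expected reward is proportional to its share of the network hash rate either way, but a tiny solo share means blocks are found so rarely that income is extremely lumpy. The figures below, including the 12.5 BTC block subsidy of the time, are assumptions for illustration:

```python
# Expected block rewards scale linearly with hash-rate share; pooling does not
# change the expectation, it only smooths the variance of payouts.
BLOCKS_PER_DAY = 144
BLOCK_REWARD_BTC = 12.5   # block subsidy around the time of writing (assumption)

def expected_btc_per_day(hashrate_share):
    return BLOCKS_PER_DAY * hashrate_share * BLOCK_REWARD_BTC

print(expected_btc_per_day(0.00001))  # small solo miner: ~0.018 BTC/day on average
print(expected_btc_per_day(0.15))     # large pool: ~270 BTC/day, split among members
```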

All of the aforementioned pools operate in China, which means that those managing the majority of hashing power on the network are highly consolidated within one region. Some view this as problematic, believing it makes it easier for a 51% attack to occur. With so few parties managing a disproportionate amount of the hashing power in the same geographic region, it would be easier for them to collude and align interests if they decided to attempt such an attack. However, individual miners are free to join whichever pools they please, so if they disagree with how a pool is operating they can join a new one. Additionally, nodes on the network can fork the network if they disagree with how the mining pools are controlling the transaction history. In addition to the majority of hashing power passing through China, the majority of the Bitcoin mining industry is powered by hardware built by Chinese manufacturer Bitmain. Bitmain is projected to have somewhere around 65% market share of the Bitcoin mining hardware industry and operates both AntPool and BTC.com. Accordingly, the majority of hash power on the network is touched by Bitmain in some way, placing a considerable amount of power in their hands.

Many are concerned about the future of Bitcoin if mining power is to remain centrally concentrated within the hands of a few mining pools. If four mining pools control most of the hashing power, don’t they essentially control the future of the network? I can’t say for certain how Bitcoin will evolve in the future, but there is evidence from Bitcoin’s past to suggest that miners will not dictate how the network changes. In November of 2017, Bitcoin was slated to undergo a massive hard fork known as SegWit2x that was designed to upgrade its block size limit from 1MB to 2MB. The true motives for the update remain a heatedly debated topic. Allegedly, the intent of the upgrade was to overcome the scalability problems associated with Bitcoin and allow faster payments. However, many believed that it was a move by miners and large Bitcoin operations to subvert the network and profit by collecting more fees and selling more expensive equipment. Prior to the fork, the majority of miners signaled support for the update, but there was no support from Bitcoin Core developers and little from the community broadly. The Bitcoin community pushed for a UASF (User Activated Soft Fork), which was designed to activate SegWit (Segregated Witness) without the SegWit2x condition that the block size must increase. With such low support from the community, SegWit2x failed and the UASF (or BIP-148) passed, a major victory for node operators and currency hodlers.

Only time will tell if the approach Satoshi took will prove the most optimal in the long run, but so far his assumptions have been well supported. The network has been self-regulating and the incentives have been so well aligned that it has continued to gain adoption and use for over a decade. In the next article, I will explore how another veteran protocol, Factom, has approached decentralization and draw on some of the lessons to be learned. Until then, Happy New Year!