Blockchain Scalability: Execution, Storage, and Consensus

Definition

Blockchain scalability is the ability of a blockchain to process transactions, store data, and reach consensus as additional users are added to the network.

Trust minimization is a valuable security property that blockchain technology is uniquely positioned to generate—replacing handshakes, brand reputation, and paper contracts with guarantees based on computer code, cryptography, and decentralized consensus. These superior guarantees provided by blockchains form the basis of cryptographic truth.

Cryptographic truth brings trust minimization to the backend computing and record keeping of applications.

Blockchains have succeeded in bringing trust minimization to new use cases including monetary policy (e.g. Bitcoin) and digital asset trading (e.g. DEXs). However, blockchains have historically struggled to maintain trust minimization for use cases that require speeds and costs comparable to traditional computing systems. These scalability limitations can be felt by users in the form of high transaction costs and cause developers to doubt whether blockchains can support high-value use cases that hinge on handling data in real time.

With the ultimate goal of unlocking blockchain technology for all users and use cases, scalability is at the forefront of blockchain research and development as a key element in smart contracts becoming the preferred backend of major industries such as finance, supply chain, gaming, and beyond. The following post provides an overview of blockchain scalability, focusing on how blockchains differ from traditional computing before outlining the advantages and tradeoffs of different approaches to scaling the execution, storage, and consensus layers of blockchains.

*Note: This blog is not an exhaustive list of every approach and aspect of blockchain scalability, as solutions are constantly being researched, tested, deployed, and updated due to the cutting-edge nature of blockchain research and development.*

Blockchains vs. Traditional Computing

Before discussing how to scale blockchains, it’s important to first understand why blockchain computing is fundamentally different from traditional computing. In general, blockchains are valuable for three reasons:

  • Deterministic computation—predefined coded logic executes exactly as written with a very high level of certainty.
  • Credible neutrality—there is no central administrator and no special network privileges, meaning anyone can submit transactions without fear of censorship or discrimination.
  • End-user verification—the historical and current state of the blockchain’s ledger and the code underpinning the client software are auditable by anyone in the world.

At a more technical level, blockchains are tasked with managing an internal ledger of data, which can represent asset ownership, contract state, or simply raw information. Most blockchain networks are managed by two overlapping yet distinct sets of participants: block producers and full nodes.

Block producers gather unconfirmed transactions submitted by users, check their validity, and place them into data structures called blocks. Block producers are generally referred to as miners in Proof-of-Work (PoW) chains or validators in Proof-of-Stake (PoS) chains, with PoW and PoS acting as Sybil-resistance mechanisms to ensure that blockchain ledgers remain live and immune to censorship.

The blocks submitted by block producers are then accepted or rejected by full nodes—entities that independently store a full copy of the chain’s ledger and continually validate new blocks but are not required to participate in block production. Full nodes are run by most block producers but also include end-users and key economic actors such as exchanges, RPC providers, and stablecoin issuers. Ultimately, full nodes keep block producers honest by rejecting invalid blocks, even in a situation where the majority of block producers are malicious. This makes the creation of invalid blocks a waste of time and money, assuming a sufficient number of honest full nodes exist.

Users leverage full nodes to submit transactions to blockchains while miners/validators submit blocks to full nodes for validation.

Furthermore, the separation of full nodes and block producers prevents miners/validators from corrupting the blockchain by arbitrarily changing the protocol’s rules. This is part of a checks and balances system where block producers only have the power to order transactions but do not dictate the rules of the blockchain. The rules are governed by the full node community, which ideally anyone can easily participate in. To obtain a deeper understanding of the underlying architecture of blockchains, check out Cryptographic Truth: The Future of Trust-Minimized Computing and Record-Keeping.

Reducing hardware requirements is crucial to lowering the barrier to entry for running a full node, which is historically how blockchains have remained decentralized—a critical component to generating trust minimization. However, decentralization is also the property that generally makes blockchains slow, since the network can for the most part only operate at the speed of the slowest node. This dynamic is described by the “blockchain trilemma”—also called the scalability trilemma—which states that traditional blockchains can only maximize two out of the three properties: scalability, decentralization, and security.

The blockchain trilemma showcases the tradeoff that has to be made by traditional blockchains when attempting to maximize scalability, security, and decentralization.

One limitation of traditional blockchain models is that achieving scalability usually requires sacrificing decentralization, security, or some degree of both. For instance, a scalable and decentralized network must incentivize a very large number of active participants before it can achieve high security. A scalable and secure network will generally raise the cost of running a node, at the expense of decentralization. And a decentralized, secure network keeps node requirements low and the cost of attacks high, but runs into scalability bottlenecks.

Unlike blockchains, traditional computing environments do not have to worry about decentralization since maximizing trust minimization is not their primary objective. This is why traditional computing networks are generally centralized and operated by for-profit companies, naturally reducing their cost and increasing their speed since the network is managed by a single entity that doesn’t have to design around making its computation independently verifiable by end-users.

As a result, the trust model of traditional computation environments is based on brand and legal contracts. In contrast, blockchain trust models rely on cryptography and game theory, offering independent verifiability and often supporting direct user participation. The trust model of traditional computing environments is simply not compatible with blockchain networks since they are subject to chokepoints such as external influence, single points of failure and control, and processes that can’t be audited by users.

These dynamics get at the essence of blockchain scalability: How do blockchains achieve the speed and costs of traditional computing environments while still maintaining strong trust-minimization properties of security and decentralization?

Three Key Properties of Blockchain Scaling

Blockchain scaling can be broken down into three general categories: execution, storage, and consensus. Below, we define each property and look at the core problem it seeks to solve. In practice, scaling one property is often dependent on or results in the scaling of one or two other properties.

Blockchain Execution

Blockchain execution is the computation required to execute transactions and perform state changes. Transaction execution involves checking the validity of transactions (e.g. verifying signatures and token balances) and executing the on-chain logic needed to calculate state changes. State changes are when full nodes update their copy of the ledger to reflect new token transfers, smart contract code updates, and data storage.
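The execution flow described above can be sketched as a simple state transition: validate a transaction, then compute the resulting state change. The function and state layout below are purely illustrative (signature verification is omitted), not the design of any specific blockchain client.

```python
# Hypothetical sketch of transaction execution as a state transition:
# validate a transfer against current balances, then apply it to a copy
# of the ledger state, as a full node would when processing a block.

def apply_transfer(state: dict, sender: str, recipient: str, amount: int) -> dict:
    """Validate a transfer and return the new state; reject invalid transactions."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if state.get(sender, 0) < amount:
        raise ValueError("insufficient balance")  # validity check
    new_state = dict(state)  # the state change computed by every full node
    new_state[sender] -= amount
    new_state[recipient] = new_state.get(recipient, 0) + amount
    return new_state

state = {"alice": 100, "bob": 25}
state = apply_transfer(state, "alice", "bob", 40)
```

Every full node repeats this computation for every transaction, which is why execution throughput is bounded by the slowest validating node.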

The scalability of blockchain execution is commonly thought of in terms of transactions per second (TPS), but on a more general level, it refers to the number of computations per second since transactions can vary in complexity and cost. The more transactions that flow through a network, the more computations that need execution at any given time.

When scaling the execution layer, the main problem to solve is how to achieve more computations per second without substantially increasing the hardware requirements on individual full nodes that validate the transactions in blocks.

Blockchain Storage

Blockchain storage refers to the storage requirements of full nodes, which maintain and store a copy of the ledger. Blockchains have two general forms of storage:

  • Historical data encompasses all the raw transactions and block data. Transaction data includes the origin and destination addresses, the amount sent, and the signature of every individual transaction. Block data includes the list of transactions and metadata from a specific block, such as its Merkle root, nonce, previous block hash, etc. Historical data doesn’t typically require quick access, and there only needs to be at least one honest entity making it available for download.
  • Global state is a snapshot of all the data that smart contracts can read from or write to, such as account balances and the variables within all smart contracts. Global state can be generally thought of as the database of a blockchain, which is required to validate incoming transactions. State is commonly stored within tree structures (e.g. Merkle trees) where access and modifications can be easily and quickly made by a full node.
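To make the tree-structure point concrete, here is a minimal Merkle-root computation over a list of transaction hashes. Real chains use more elaborate tree variants, but the principle that a single short root commits to all underlying data is the same; the helper names are illustrative.

```python
# Illustrative Merkle-root computation: hash the leaves, then repeatedly
# hash adjacent pairs until one root remains. The root is a compact
# commitment to every leaf, enabling short membership proofs.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3"])
```

Changing any single leaf changes the root, which is what lets full nodes detect tampering cheaply.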

Full nodes need access to historical data in order to sync to the blockchain for the first time and global state in order to validate new blocks and execute new state changes. As the ledger and associated storage grow, computation of state becomes slower and more expensive because nodes require more time and computations to read from and write to state. If a node’s memory storage becomes full, it will need to use disk space storage, which further slows down computation since nodes need to swap between storage environments during execution.

Blockchains with increasing storage requirements often experience state bloat—a situation in which, without hardware upgrades, it becomes harder for full nodes to stay synced to the current version of the ledger (i.e. the chain tip) and for users to sync up new full nodes. Some factors that may affect whether a blockchain experiences state bloat include the historical length of the ledger, the frequency of new block additions, the max size per block, and the amount of data that must be stored on-chain to verify transactions and execute state changes.

When scaling the storage layer, the main problem to solve is how to allow blockchains to process and validate more data without increasing storage requirements for full nodes; i.e., where can data be stored long term without major changes to the trust assumptions of blockchains?

Blockchain Consensus

Blockchain consensus is the method by which nodes in a decentralized network reach an agreement on the current state of the blockchain. Consensus is mostly concerned with achieving an honest majority in the face of a certain threshold of malicious actors and reaching finality; i.e., transactions are accurately processed and highly unlikely to ever be reversed. Blockchain consensus is generally designed around minimizing communication overhead in order to increase the upper bound on decentralization for stronger Byzantine fault tolerance and lower the time to finality for faster settlement.

When scaling the consensus layer, the main problem to solve is how to reach finality faster, cheaper, and with more trust minimization—all in a predictable, stable, and accurate manner.

Scaling the Execution Layer

Below are five different approaches currently being taken to scale the execution layer of blockchains along with the advantages and tradeoffs of each. In practice, some of these approaches are combined for even greater execution capacity.

Vertical Scaling of Validator Hardware Requirements

Blockchain execution can be scaled by increasing the hardware requirements for block producers. Higher hardware requirements lead to each validator being able to perform more computations per second.

Advantages: Having a single decentralized network made up of high-computing-capacity validators leads to blockchains that can support larger blocks, faster block times, and lower transaction costs while still maintaining on-chain composability between smart contracts and potentially higher trust minimization than traditional computing models. Such blockchains can be particularly useful for high-frequency trading, gaming, and other latency-sensitive use cases.

Tradeoffs: Vertical scaling of validators will limit network decentralization given the higher cost of running a validator or full node. Node costs will often increase over time, making it hard for most users to participate. Remaining decentralized will become dependent on Moore’s law, which states that the number of transistors on a microchip doubles around every two years while the cost of computers halves. Higher full node costs can also increase the costs for end-users who want to directly verify activity happening on-chain, lowering trust minimization.

Horizontal Scaling via Multi-Chain Ecosystems

An alternative to vertical scaling is horizontal scaling through the use of multiple independent blockchains or sidechains within a single ecosystem. Horizontal scaling spreads the computation of transactions in an ecosystem across many independent blockchains, with each chain having its own block producers and execution capacity.

Advantages: Multi-chain ecosystems enable the execution layer of each individual chain to have fully customizable features such as node hardware requirements, privacy features, gas token usage, virtual machine (VM) choice, permission settings, and more. This design is why multi-chain ecosystems sometimes result in dApp chains, where individual blockchains specialize in supporting individual dApps or small collections of dApps. Self-sovereign blockchains can also help isolate security risks, with one chain’s design choice for security not always affecting other chains in the ecosystem.

Tradeoffs: Multi-chain ecosystems require each blockchain to bootstrap its own security through a native token that’s issued in an inflationary manner. Though this is standard in the early growth stages of blockchains, it may prove difficult to move towards a less dilutive, more sustainable economic model based on on-chain user fees since user fees will be spread across many independent blockchains. There are also composability challenges since dApps and tokens that want to interoperate don’t always exist on the same blockchain.

Horizontal Scaling via Execution Sharding

A similar yet unique approach to multi-chain scaling is having a single blockchain that supports parallel execution across many different shards. Each shard essentially acts as its own blockchain, meaning many blockchains can execute in parallel. There is also a single main chain that has the sole purpose of keeping all shards synced together.

In execution sharding, there is one pool of validators that is split up across shards to execute transactions. Nodes are randomly and regularly rotated so they don’t always execute/validate the same shard, with the number of shards configured to make the risk of corrupting any single shard statistically insignificant.
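The random rotation described above can be sketched as a seeded shuffle that spreads one validator pool evenly across shards each epoch. The epoch-based seeding and names here are illustrative assumptions, not any specific protocol's assignment algorithm.

```python
# Sketch of random validator-to-shard rotation: each epoch, a deterministic
# per-epoch shuffle reassigns the shared validator pool across shards so no
# node always validates the same shard.
import random

def assign_shards(validators: list[str], num_shards: int, epoch: int) -> dict:
    rng = random.Random(epoch)            # deterministic per-epoch seed
    shuffled = validators[:]
    rng.shuffle(shuffled)
    shards = {s: [] for s in range(num_shards)}
    for i, v in enumerate(shuffled):
        shards[i % num_shards].append(v)  # round-robin after shuffling
    return shards

pool = [f"validator-{i}" for i in range(12)]
epoch_1 = assign_shards(pool, num_shards=4, epoch=1)
```

Because assignment is random, an attacker cannot predict or target the validator set of any single shard in advance.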

Advantages: All execution shards pull from the same pool of nodes, so there’s no need to bootstrap security on new shards. Assuming there is a large pool of nodes, every execution environment can achieve the same level of security. Execution sharding also doesn’t require raising the hardware requirements for nodes, as nodes only perform execution on one shard at a time. Shards can also operate with the same VM or use different configurations to meet the unique requirements of certain use cases.

Tradeoffs: Each shard is limited in flexibility given that all nodes must be able to support the computation of every shard. There is also generally a limit to the number of shards one blockchain can support due to the increasing computation requirements put on the main chain and the risk of having too few nodes per shard. Furthermore, there are frictions when it comes to load balancing as well as implementation risk given that shared security models mean that all shards may be subject to the same vulnerability.

Multi-chain ecosystems generally do not share security across blockchains while execution sharding distributes security across shards from one pool of node operators.

Horizontal Scaling via Modularity

Another approach to horizontal scaling is modular blockchains, where the architecture of a blockchain is separated into multiple layers; i.e., isolating the execution, data availability (DA), and consensus components. The most popular way to perform execution in modular blockchain implementations is via rollups, which move computation and state into off-chain networks while storing transaction data on-chain. State changes computed off-chain are then proven on-chain: proactively proven valid using zero-knowledge proofs (zk-rollups), or retroactively challenged as invalid using fraud proofs (optimistic rollups).

Advantages: Modular blockchains offload transaction execution and state to a cheaper, leaner, and higher-throughput computing environment while still inheriting the security of the underlying blockchain used for settlement. This is because the consensus process, in which the validity of off-chain computation performed by the execution layer is verified, is carried out by an existing decentralized baselayer (i.e. L1) blockchain. Intuitively, this means the computational bandwidth of a baselayer blockchain can be used more efficiently because full nodes don’t need to execute every transaction. Full nodes just need to verify succinct proofs and store a small amount of transaction data.  

Rollups can also support escape hatches for trust minimization; i.e., if a rollup network is not working properly, users can withdraw their crypto and submit it to the baselayer blockchain. Many modular networks can also amortize user costs; i.e., there are fixed costs for verifying the proof of a zk-rollup on the baselayer blockchain, meaning consensus costs can be reduced as usage increases since they are shared amongst a larger number of users. Furthermore, rollups have a 1-of-n trust model—only one honest node is required to ensure the correctness and liveness of the computation.
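The amortization point can be shown with back-of-the-envelope arithmetic: the on-chain proof-verification cost is roughly fixed per batch, so the per-transaction share falls as the batch grows. The gas figures below are illustrative placeholders, not real measurements.

```python
# Illustrative zk-rollup cost amortization: a fixed on-chain verification
# cost is split across every transaction in a batch, while each transaction
# still pays its own (small) data-posting cost.

def per_tx_cost(fixed_verification_cost: float,
                per_tx_data_cost: float,
                txs_in_batch: int) -> float:
    return fixed_verification_cost / txs_in_batch + per_tx_data_cost

small_batch = per_tx_cost(500_000, 300, 10)     # cost shared by 10 users
large_batch = per_tx_cost(500_000, 300, 1_000)  # cost shared by 1,000 users
```

With these assumed numbers, per-user cost drops from 50,300 to 800 gas as the batch grows, which is why rollup fees tend to fall with usage.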

Tradeoffs: Modular blockchains may not be as fast or as cheap as sidechains or standalone chains since most approaches leverage the use of a baselayer blockchain’s limited and sometimes expensive block space for security. Current approaches to modular networks also commonly carry upgradability risks that require governance intervention (outside immutable enshrined rollups) and may result in liquidity fragmentation and composability challenges if some dApps remain on a baselayer blockchain while others run across different off-chain execution layers. Finally, implementing a rollup or other modular blockchain designs is a newer and more complex process than launching a new standalone blockchain.

A proposed way to scale Ethereum is modular blockchains, separating the execution, data availability, and consensus layers (source).

Payment and State Channels

Payment and state channels can be used for blockchain scaling by allowing users to lock cryptocurrency into a multisig smart contract with other parties and then exchange signed messages off-chain representing a transfer of asset ownership and/or change of state without making any on-chain transactions. Users only need to make on-chain transactions when opening a channel and closing a channel.

The multisig contract is used to ensure the correct settlement of the channel by having users cryptographically sign each interaction, with each signature accompanied by a nonce so the smart contract can verify the correct order of transactions.
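A minimal sketch of the nonce-ordered message flow: each off-chain update is signed and carries an increasing nonce, and settlement accepts only the highest-nonce update with a valid signature. HMAC with a shared key stands in for real asymmetric signatures here, and all names are illustrative.

```python
# Sketch of a payment channel's off-chain updates. Each update is signed
# over (balances, nonce); the settlement logic mimics the multisig contract
# by accepting the valid update with the highest nonce.
import hashlib, hmac, json

KEY = b"demo-shared-key"  # placeholder; real channels use asymmetric keys

def _payload(balances: dict, nonce: int) -> bytes:
    return json.dumps({"balances": balances, "nonce": nonce}, sort_keys=True).encode()

def sign_update(balances: dict, nonce: int) -> dict:
    sig = hmac.new(KEY, _payload(balances, nonce), hashlib.sha256).hexdigest()
    return {"balances": balances, "nonce": nonce, "sig": sig}

def settle(updates: list) -> dict:
    """Return balances from the highest-nonce update with a valid signature."""
    valid = [u for u in updates
             if hmac.compare_digest(
                 hmac.new(KEY, _payload(u["balances"], u["nonce"]),
                          hashlib.sha256).hexdigest(), u["sig"])]
    return max(valid, key=lambda u: u["nonce"])["balances"]

u1 = sign_update({"alice": 60, "bob": 40}, nonce=1)
u2 = sign_update({"alice": 30, "bob": 70}, nonce=2)
final = settle([u1, u2])
```

Submitting the stale update `u1` would lose to `u2`, which is exactly why parties (or their watchtowers) must watch for old states being settled.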

Advantages: Payment and state channels allow transfers of cryptocurrency to happen in real-time for zero cost and near-instant latency. Payment channels make micropayments feasible, which are often not possible on a baselayer blockchain. They also allow the cryptocurrency locked in the channel to be settled swiftly on-chain if both parties cooperate.

Tradeoffs: State/payment channels require each participating party of a channel to be connected to the Internet to ensure their counterparties are not trying to use old messages to settle the channel on-chain. This often necessitates the use of watchtowers to continually monitor the channel and protect user funds. Payment channels also need to be pre-funded with liquidity, which can make large payments difficult and result in capital inefficiency.

Efficiently routing payments across a network of channels is a difficult problem that can result in failed transfers or the creation of a more centralized hub-and-spoke model to ensure participants have access to sufficient liquidity and short routes. Generally, state/payment channels work best between a known set of static participants but don’t work well with a dynamic or unbounded set of participants. There is also the ownership problem, where it’s difficult or often impossible for channels to represent objects that do not have a clear logical owner (e.g. DEX liquidity pool).

Scaling Data Storage

Below are six different approaches currently being taken to scale the storage layer of blockchains. In practice, some of these approaches are combined for even greater storage improvements.

Vertical Scaling of Blockchain Nodes

Similar to vertical scaling of blockchain execution, vertical scaling of blockchain storage involves raising the hardware requirements of running a full node.

Advantages: Blockchains with higher storage limits for full nodes can offer a large volume of cheap storage; i.e., full nodes can store more historical data and larger amounts of state. Direct full node storage enables easier access to on-chain data given that there are no additional storage layers or external dependencies.

Tradeoffs: Since there is more and more data to store over time, the decentralization of the blockchain becomes increasingly at risk as the costs of running a full node increase. With less decentralization, fewer trust-minimized assurances can be provided to users that data will be available and correct. State bloat can also lead to slower execution of blocks over time, increasing the strain on the network as a whole.

Data Sharding on Layer-1 Blockchains

Another approach to scaling the data storage of blockchains is data sharding. Data sharding splits the storage of the ledger and/or the data used to recreate the ledger across many shards, reducing an individual node’s storage requirements at any given time to that of a single shard or small set of shards.

Advantages: Data sharding allows blockchains to increase their capacity to store data cheaply without increasing the hardware requirements for individual nodes. Such an approach is beneficial for maintaining decentralization since it increases the ability of users to run their own nodes. Data sharding also provides greater storage capacity for rollups that store transaction data on baselayer blockchains—a requirement to rebuild the rollup’s state. Moreover, approaches such as Danksharding allow for a merged fee market for better load-balancing and inclusion of data.  

Tradeoffs: There may be limits on the number of shards one blockchain can support due to the increased load on the main chain. There is also a need for data availability sampling (DAS), which proves that historical data needed to reconstruct part of the ledger was available at one point (i.e. when the block was produced) without nodes actually having to download all the data themselves. Additionally, data sharding requires communication overhead to pass storage between nodes when rotating nodes to different shards. It also requires a large number of nodes to maintain high security—there must be a certain level of decentralization per shard, so the total pool of nodes needs to be large since it’s split out amongst all shards.
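The statistical intuition behind DAS can be shown with a one-line probability calculation: if a producer withholds some fraction of a block's chunks, the chance that random samples all miss the withheld portion shrinks exponentially with the number of samples. The math below is illustrative only and ignores erasure-coding details.

```python
# Sketch of why data availability sampling works: the probability that at
# least one of k uniform random samples lands on a withheld chunk.

def detection_probability(withheld_fraction: float, samples: int) -> float:
    """Probability that random sampling detects missing data."""
    return 1 - (1 - withheld_fraction) ** samples

p = detection_probability(withheld_fraction=0.5, samples=20)
```

With half the chunks withheld, 20 samples detect the withholding with probability above 99.9999%, so each light node gains high confidence without downloading the full block.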

Compressed On-Chain Data Storage With Modular Blockchains

Modular blockchains perform computation off-chain and then store transaction data or state differences either on-chain or off-chain. The data allows other nodes or users to rebuild the current or historical state of the ledger. When rollups employ on-chain data storage, transaction data is often compressed off-chain prior to being stored on-chain.
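The compression step can be illustrated with generic zlib compression of repetitive transaction data. Real rollups use domain-specific encodings rather than zlib, so treat this purely as a demonstration of the size savings principle.

```python
# Illustrative compression of repetitive transaction data before posting
# it on-chain: repeated fields compress well, shrinking the calldata the
# baselayer must store.
import json, zlib

txs = [{"from": "0xabc", "to": "0xdef", "amount": i, "token": "USDC"}
       for i in range(100)]
raw = json.dumps(txs).encode()
compressed = zlib.compress(raw, level=9)
savings = 1 - len(compressed) / len(raw)
```

Decompressing recovers the exact original bytes, so anyone can still rebuild the rollup's state from the posted data.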

Advantages: Compressed on-chain data storage is the most secure form of data storage for modular blockchains because data is stored by all full nodes on the network. It also reduces the cost of storing data on the layer-1 blockchain. When combined with data sharding, rollups are provided access to a more efficient and cheaper on-chain storage environment for transaction data that scales better with increased usage.

Tradeoffs: On-chain data storage is more expensive than off-chain storage, which may inhibit the ability of modular blockchains to match the scalability of less decentralized storage options. Compressing data may also drop parts of the data that are not strictly required for validation, potentially inhibiting a more granular analysis of chain activity based on that data.

Off-Chain Data Storage in Modular Blockchain Designs

Modular blockchains can store transaction data off-chain to further reduce on-chain storage requirements. This includes “validiums,” which publish zero-knowledge proofs on-chain while storing data off-chain. There are four main approaches to off-chain data storage by modular blockchains:

  • Centralized storage consists of off-chain storage on a centralized platform. While it’s the cheapest way to store data, it can be subject to data withholding and security issues such as the centralized storage platform modifying data or going offline.
  • Permissioned DACs store data off-chain but provide on-chain attestations of that data being published correctly using a signature scheme from a small committee of trusted nodes, referred to as a data availability committee (DAC). The advantages and tradeoffs are similar to centralized storage solutions but with slightly better trust assumptions on availability.
  • Permissionless DACs store data off-chain but provide on-chain proofs using permissionless DACs with cryptoeconomic incentives to act honestly. Permissionless DACs are cheaper than on-chain storage solutions while being more secure than other off-chain solutions. The tradeoffs are that this is still less secure than on-chain storage and has yet to be achieved in production at scale with sustainable economics.
  • Volitions enable users to choose whether they want to store their transaction data on-chain or off-chain. Volitions are novel because they enable data availability solution options at the individual transaction level while allowing all transactions to share the same state root and consensus cost. However, this method is more complex than the others listed above and has yet to be achieved in production.

Data Pruning

Data pruning is a technique that enables blockchain full nodes to discard historical data beyond a specific block height. Data pruning is often paired with Proof-of-Stake checkpoints, where the transactions in blocks beyond the checkpoint are considered final; i.e. they can’t be reversed without major social consensus or a hard fork.
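The pruning mechanic is straightforward to sketch: discard all block data below a finalized checkpoint height while retaining everything at or above it. The data layout below is an illustrative assumption, not any client's actual storage format.

```python
# Sketch of pruning historical block data below a finalized checkpoint,
# keeping only the recent blocks needed to validate the chain tip.

def prune(blocks: dict, checkpoint_height: int) -> dict:
    """Discard block data strictly below the checkpoint height."""
    return {h: b for h, b in blocks.items() if h >= checkpoint_height}

chain = {h: f"block-{h}".encode() for h in range(100)}
pruned = prune(chain, checkpoint_height=90)
```

After pruning, the node keeps 10 of the 100 blocks; anyone needing older history must fetch it from an archive provider, which is the 1-of-n trust assumption discussed below.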

Advantages: Data pruning reduces the amount of data that a node needs to store or reference when participating in consensus, resulting in a smaller ledger. Because historical data has already been validated, it is no longer needed if the intent of operating a full node is just to validate future blocks, as opposed to also offering historical look-backs.

Tradeoffs: Data pruning relies on third parties (e.g. exchanges, block explorers, etc) to store historical data permanently in order to rebuild state back to the genesis block. However, it’s a 1-of-n trust model, so only one third party needs to store the data honestly in order for a full node to be able to recreate all historical state. With Proof-of-Stake offering checkpoints and weak subjectivity, this assumption becomes less relevant. However, such data is still important for on-chain analytics and block explorers.

Statelessness, State Expiry, and State Rent

There also exist methods focused around limiting the amount of state that full nodes have to store, particularly through state expiry, statelessness, or state rent implementations.

  • State expiry designs allow nodes to prune state that hasn’t been accessed in a certain amount of time, yet utilize a type of Merkle proof (called a “witness”) to revive expired state if needed.
  • Statelessness designs are where full nodes are not required to store state. Full nodes only need to validate new blocks with the inclusion of witnesses. Weak statelessness is when only block producers are required to store global state while all other nodes can verify blocks without storing state.
  • State rent designs require that users pay to maintain limited state storage. State that is no longer being paid for is recycled and rented out to new users.

Advantages: Methods for limiting state storage requirements ultimately help cap the amount of state that individual nodes have to store. This helps alleviate state bloat, even amidst a growing ledger or increasing number of on-chain transactions. Limiting state storage is crucial for maintaining long-term end-user verification while still maintaining practical hardware requirements.

Tradeoffs: Limiting state storage is a fairly novel approach and eliminates the idea of users paying a single time to have every full node in the network store their state in perpetuity, a stark contrast to how blockchains handle state today. Furthermore, upgrading a blockchain that uses a traditional state storage model to a more limited state storage model is difficult and may break applications that made specific assumptions during development about state always being accessible. New state storage models may also make particular applications more expensive than they were previously.

Scaling Consensus

Below are four general goals when trying to scale blockchain consensus mechanisms as they pertain to more frequent block times, faster finality, and enhanced robustness against downtime or malicious attacks. Note that scaling consensus is not just about speed but also accuracy, stability, and security.

Increase Execution and Storage Capacity

A foundational component in scaling a blockchain’s consensus mechanism is increasing its computational and storage capacity without substantially raising the hardware requirements for full nodes. This will allow more nodes to participate in consensus or at least prevent existing nodes from dropping off the network as the ledger grows—helping maintain strong consensus guarantees around uptime, censorship resistance, accuracy, and security. If execution and storage capacity is raised to a significant level without meaningful impact on full nodes, blockchains may even be able to support faster block times and/or larger block sizes in a stable manner without sacrificing their core property of decentralization.

Reduce Networking Bandwidth

Another way to approach scaling a blockchain’s consensus mechanism is to reduce networking bandwidth; i.e., the communication overhead (sending and receiving messages) required between full nodes in order to reach consensus. Instead of requiring that nodes communicate with all other nodes (i.e. all-to-all voting), blockchain consensus can be designed so that nodes only need to communicate with a small portion of other nodes at any moment in time (e.g. sub-sampling). Some consensus designs do not use multi-round voting or communication schemes, so the only communication required is the propagation of blocks, but this generally comes at the expense of probabilistic finality.
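The bandwidth difference between the two communication patterns can be sketched with a rough counting model (an illustrative simplification; real protocols add rounds, gossip, and retries). All-to-all voting grows quadratically with validator count, while sub-sampling keeps each node's outgoing messages bounded by a fixed sample size k:

```python
# Per-round message counts under a simplified model.

def all_to_all_messages(n: int) -> int:
    """Every node sends a vote to every other node: n * (n - 1)."""
    return n * (n - 1)

def subsample_messages(n: int, k: int) -> int:
    """Each node queries a random sample of k peers: n * k."""
    return n * k

n = 1000
print(all_to_all_messages(n))     # 999000 messages per round
print(subsample_messages(n, 20))  # 20000 messages per round
```

At 1,000 validators, the sub-sampling pattern requires roughly 50x fewer messages per round in this model, which is why it allows consensus to scale to larger validator sets without saturating network bandwidth.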

Reduce Network Latency

There are also methods focused on trying to reduce network latency during consensus, particularly as it relates to lowering the time to finality. Some blockchain consensus mechanisms have instant finality either through multi-round sub-sampling or all-to-all voting rounds. Other blockchains implement checkpoints secured by a supermajority consensus of validators after a period of time, meaning blocks are considered final past the checkpoint since there can no longer be in-protocol re-orgs beyond it. Often a tradeoff between network latency and network bandwidth has to be made, although some hybrid approaches have been optimized for both.
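The checkpoint approach described above can be sketched as a small Python model. This is not any specific protocol's rule set, just an illustration of the two checks involved: a checkpoint is finalized once a supermajority (more than 2/3) of total validator stake attests to it, and any block at or before a finalized checkpoint is then considered final since in-protocol re-orgs behind it are forbidden:

```python
def checkpoint_finalized(attested_stake: int, total_stake: int) -> bool:
    """Supermajority check: strictly more than 2/3 of stake attested."""
    return attested_stake * 3 > total_stake * 2

def is_final(block_height: int, finalized_checkpoints: list) -> bool:
    """A block is final if some finalized checkpoint lies at or past it."""
    return any(cp >= block_height for cp in finalized_checkpoints)

# A checkpoint at height 64 with 70 of 100 stake attested finalizes
# every block up to and including height 64.
checkpoints = [64] if checkpoint_finalized(70, 100) else []
print(is_final(50, checkpoints))  # True
print(is_final(65, checkpoints))  # False
```

The latency tradeoff mentioned above shows up here directly: collecting supermajority attestations takes multiple rounds of communication, so checkpoint-based finality trades a delay of several blocks for a hard guarantee against re-orgs.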

Increase the Security Budget

The trust minimization of consensus can also be scaled by increasing the security budget that funds nodes participating in consensus. This is generally done by achieving a monetary premium, having inflationary token rewards, and/or growing transaction fee revenue because demand for block space exceeds supply. Higher security budgets open up more potential revenue for participants, which may then increase the network’s decentralization since more nodes are incentivized to join. Blockchains can also require nodes to put up more stake or computational power to participate in consensus, although this risks increasing the centralization of the network if requirements become too high.
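As a back-of-the-envelope illustration of the security budget components above, the sketch below combines inflationary issuance and fee revenue into an annual figure. All numbers are made up for the example; real budgets depend on protocol issuance schedules and market conditions:

```python
def security_budget_usd(inflation_rate: float, token_supply: float,
                        annual_fees_tokens: float,
                        token_price_usd: float) -> float:
    """Annual security budget = (issuance + fee revenue) * token price."""
    issuance_tokens = inflation_rate * token_supply
    return (issuance_tokens + annual_fees_tokens) * token_price_usd

# e.g. 2% inflation on a 100M-token supply plus 500k tokens/year in
# fees, valued at $10 per token: (2M + 0.5M) * $10 = $25M per year.
budget = security_budget_usd(0.02, 100_000_000, 500_000, 10.0)
print(budget)  # 25000000.0
```

The model also makes the monetary-premium point visible: holding issuance and fees fixed, a higher token price directly scales the budget available to reward honest consensus participants.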

A Scalable and Secure Cross-Chain Future

Blockchain scalability is at an exciting point in its development as demonstrated by the plethora of solutions being built, tested, and launched into production. With a strong focus on scaling while preserving trust minimization, blockchains are poised to cement themselves as the go-to backend for a wide variety of industries and use cases.

In support of the expanding multi-chain ecosystem, the Cross-Chain Interoperability Protocol (CCIP) is actively being developed on top of Chainlink to enable users to securely exchange data and tokens between different blockchains based on user-defined logic. CCIP is being built with a strong focus on security, as demonstrated by the development of a Risk Management Network, to enable cross-chain smart contracts and secure token bridging in a manner that doesn’t break the trust assumptions of blockchains. For more information about CCIP, check out: Unlocking Cross-Chain Smart Contract Innovation With CCIP.

The proposed architecture of CCIP.

To learn more about Chainlink, visit the Chainlink website and follow the official Chainlink Twitter to keep up with the latest Chainlink news and announcements.
