Web3’s scalability problem
Cudos is building a highly scalable network that achieves security, decentralisation and speed. Read on to learn more about Web3's scalability dilemma.
5 min read
The UST (Terra/Luna) crisis rocking the crypto-verse has dominated this week's chatter. As the episode unfolds, we're reminded of the fundamental challenges hindering widespread blockchain adoption. While it's essential to recognise that the industry remains in its infancy, it's also worth highlighting that many expectations were premature and mainstream adoption remains a long way away. Our ability, or inability, to assess these "failures" frankly and critically may set the stage for eventual success or disappointment.
Over several posts, we'll address some fundamental challenges facing the Web3 space and discuss a few proposed solutions. First, we begin with an issue that directly relates to adoption itself: scalability.
What is scalability, and why is it essential for Web3 projects?
Scalability refers to the ability of a network to handle a large number of transactions per second (TPS) without compromising its security or effectiveness. Since the creation of Bitcoin, the question of how to transform peer-to-peer networks into globally usable financial, gaming, and cloud technology has remained unanswered. The blockchain trilemma suggests networks must prioritise two of three fundamental attributes: speed, decentralisation, and security. A network's ability to achieve two of these inevitably affects its ability to guarantee the third. Because transaction blocks on a blockchain are limited in size and frequency, there is a finite, defined number of TPS that can be executed on a network at any given time.
The Bitcoin network, for example, maxes out at around 7 TPS. In comparison, VISA averages 1,700 TPS, dwarfing both Bitcoin and Ethereum's roughly 12 TPS. This gap effectively precludes widespread adoption because as more people join the networks and the number of transactions increases, so do network congestion and gas fees in proof-of-work (PoW) architectures. Congestion may often lead to significant network strain and pose security challenges.
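The throughput ceiling follows directly from the block parameters. A back-of-the-envelope sketch (the Bitcoin figures below are common approximations, not exact protocol constants, and real throughput varies with transaction size):

```python
def max_tps(block_size_bytes: int, avg_tx_size_bytes: int, block_interval_s: float) -> float:
    """Approximate upper bound on transactions per second for a blockchain,
    given its block size limit, average transaction size, and block interval."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_interval_s

# Bitcoin (approx.): ~1 MB blocks, ~250-byte average transaction, ~600 s block interval
btc_ceiling = max_tps(1_000_000, 250, 600)
print(f"Bitcoin ceiling: ~{btc_ceiling:.1f} TPS")  # roughly 7 TPS
```

Raising any one parameter (bigger blocks, faster blocks) lifts the ceiling, but at the cost of heavier propagation and storage demands on nodes, which is exactly where the trilemma trade-off bites.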
The most widely accepted solution to the scalability issue is using secondary networks built on layer one (L1) blockchains to scale effectively without having to alter the base layer's original code. These secondary networks are generally referred to as layer twos (L2s). Examples of L2s include the Lightning Network (Bitcoin) and Polygon (Ethereum).
Can L2s solve the scalability problem?
Addressing scalability remains a challenge for blockchain, even though the development of L2 solutions gives reason for optimism. However, it is important to highlight two key components of the scalability problem: transactional efficiency and storage. While discussions about scalability have mainly focused on the former, the latter continues to be largely ignored. Most blockchains have a redundancy problem: typically, every node connected to the network retains the network's entire transaction history, a highly demanding storage requirement for networks striving to achieve global utility and usage.
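The cost of that redundancy scales with both chain size and node count. A quick illustration (the figures below are hypothetical, chosen only to show the order of magnitude):

```python
def total_network_storage_gb(chain_size_gb: float, full_nodes: int) -> float:
    """Total storage consumed network-wide when every full node
    keeps a complete copy of the chain's transaction history."""
    return chain_size_gb * full_nodes

# Illustrative: a 400 GB chain replicated across 10,000 full nodes
total = total_network_storage_gb(400, 10_000)
print(f"{total:,.0f} GB held collectively")  # 4,000,000 GB, i.e. ~4 PB
```

Because every new user's transactions are replicated to every node, the collective storage bill grows multiplicatively rather than linearly, which is why storage, not just throughput, matters for scaling.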
For the most part, L2s have solved the efficiency problem by improving transactional throughput on L1s, but the storage issue remains largely unaddressed. We've highlighted in a previous post why blockchains are increasingly reliant on centralised off-chain storage. IBM recently argued that off-chain storage is inevitable because networks lack the infrastructure to handle their own storage needs. But storing blockchain data off-chain raises both security and decentralisation questions.
How is Cudos approaching the issue?
Cudos comprises an L1 network and an L2 compute solution. The network's unique architecture combines blockchain security with decentralised cloud computing and storage. Built on the Cosmos SDK, Cudos offers a robust development foundation that favours security and speed whilst giving sovereign networks the ability to modify and adapt their architectures as needed. The compute layer will decentralise data storage and computation for Cudos and potentially other networks.
There are obvious advantages to building decentralised networks on decentralised infrastructure. Blockchains increasingly require off-chain storage capacity, and the emergence of NFTs and other storage-heavy digital assets is accelerating this trend. Sacrificing decentralisation for scalability has attracted criticism over the inherent risks of the practice. Implementing innovative solutions to the problem requires all hands on deck. Our testnet successes so far are owed to our vibrant community of developers, validators and enthusiasts. You too can play a role in achieving scalability without sacrificing decentralisation.
Join our ongoing Testnet!
Developers are invited to join our incentivised Testnet, which remains open to participation. You can earn rewards for completing outstanding tasks. We are in the fourth and final phase of the Testnet, with the highly anticipated Mainnet launch set for next month. Attempt the tasks here.
- Join the Cudos Discord server
- Join the Cudos Telegram community
- Buy CUDOS tokens
- Become a Cudos ambassador
About Cudo Compute
Cudo Compute is a fairer cloud computing platform for everyone. It provides access to distributed resources by leveraging underutilised computing hardware globally, including idle data centre capacity. It allows users to deploy virtual machines on the world's first democratised cloud platform, finding the optimal resources in the ideal location at the best price.
Cudo Compute aims to democratise the public cloud by delivering a more sustainable economic, environmental, and societal model for computing by empowering businesses and individuals to monetise unused resources.
Our platform allows organisations and developers to deploy, run and scale based on demands without the constraints of centralised cloud environments. As a result, we realise significant availability, proximity and cost benefits for customers by simplifying their access to a broader pool of high-powered computing and distributed resources at the edge.