Humanity is producing a staggering volume of data, but the processing power of chips cannot keep pace. In order to make the most out of IoT and help tackle global challenges, we may need to rethink how our computer networks monitor and respond to data signals.
When it comes to computer data storage, it can seem like we are running out of numbers. If you are old enough, you may remember when diskette storage was measured in kilobytes in the 1980s. If you are a little younger, you are probably more familiar with thumb drives denominated in gigabytes, or with the hard drives that hold terabytes today.
Humanity’s unfathomable data footprint
But mankind is now producing data at a rate unprecedented in human history. As a result, we are going to need to grasp numbers so large that they seem almost beyond human comprehension. To get a sense of the new realm we are entering, consider this: market intelligence firm IDC estimates that the total global creation and consumption of data amounted to 59 zettabytes in 2020 — that’s 59 trillion gigabytes in old money.
Yet while the total volume of data in existence is now at an almost unfathomable scale, the rate at which it is growing is even more striking. Back in 2012, IBM calculated that 90% of the world’s data had been created in the previous two years. Since then, the exponential growth in global data volume has continued apace and the trend looks set to continue. Indeed, IDC projects that over the next three years humanity will create more data than it did during the previous three decades.
The obvious question is: what has changed? Why are we suddenly producing so much more data than ever before? Smartphones are certainly part of the story. Everyone now effectively carries in their pocket a mobile computer that dwarfs the desktop machines of previous generations. These devices are constantly tethered to the internet, continuously receiving and transmitting data even when idle. The average US Gen-Z adult unlocks their phone 79 times a day, approximately once every 13 minutes while awake. The always-on nature of these devices has contributed to the avalanche of new data: every 24 hours, 500 million new Tweets, 4,000 terabytes of Facebook posts and 65 billion new WhatsApp messages are fired out into cyberspace.
Smartphones just the tip of the iceberg
Smartphones are merely the most visible manifestation of the new data reality, however. You might assume that video platforms like Netflix and YouTube account for the lion’s share of global data, but in fact the entire consumer share amounts to only about 50%, and that percentage is projected to fall gradually in the coming years. So what makes up the rest?
The rise of the Internet of Things (IoT) and connected devices is further expanding our global data footprint. Indeed, the fastest year-on-year growth is taking place in a category of information known as embedded and productivity data. This is information derived from sensors, connected machines and automatically generated metadata which exists behind the scenes, beyond the visibility of end users.
Take autonomous vehicles, for example, which use technologies such as cameras, sonar, LIDAR, radar and GPS to monitor the traffic environment, chart a course on the road, and avoid hazards. Intel calculates that the average autonomous vehicle using current technologies will produce 4 terabytes of data a day. To put that in perspective, a single vehicle will produce a volume of data each day equivalent to that of almost 3,000 people. It will also be critically important that this data is stored securely. First, it will be useful for scheduling service intervals and diagnosing technical problems efficiently. Second, it could be used as part of a decentralized system to coordinate traffic flow and minimize energy consumption across a city. Finally, and probably most importantly in the short run, it will be essential for settling legal disputes in the event of injuries or accidents.
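A quick back-of-the-envelope check shows where a figure like "almost 3,000 people" can come from. The per-person figure below is an assumption on our part (a commonly cited 2016-era estimate of roughly 1.4 GB of data per internet user per day), not something Intel has confirmed:

```python
# Illustrative sanity check of the vehicle-to-people comparison.
# ASSUMPTION: an average internet user generates ~1.4 GB of data per day
# (a commonly cited 2016-era estimate; not a figure from this article).

AV_DATA_PER_DAY_GB = 4_000        # autonomous vehicle: ~4 TB/day
PERSON_DATA_PER_DAY_GB = 1.4      # assumed per-person daily data

people_equivalent = AV_DATA_PER_DAY_GB / PERSON_DATA_PER_DAY_GB
print(f"One autonomous vehicle ~ {people_equivalent:,.0f} people per day")
```

Under that assumption, one vehicle generates as much data as roughly 2,900 people, consistent with the "almost 3,000" comparison.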
Now consider that autonomous vehicles are just a tiny part of the overall story. According to McKinsey, the share of businesses using IoT technology increased from 13% in 2014 to 25% in 2019, and the overall number of connected devices is projected to reach 43 billion by 2023. From industrial IoT to entire smart cities, the future economy will have a hugely increased number of connected devices producing potentially highly sensitive, or even critical, data.
The end in sight for Moore’s Law?
There are two factors to consider in this light, and both point to the increasing utility of decentralized networks. First, while we have more data than ever before to tackle global challenges like climate change, financial instability and the spread of airborne viruses like COVID, we may be approaching a hard technical limit on how much of this information centralized computers can process in real time. While data volumes have grown exponentially in recent years, processing power has not increased at the same rate.
In the 1960s, Intel co-founder Gordon Moore made the observation now known as Moore’s Law: the number of transistors on a microchip tends to double roughly every two years, and computing power increases at a corresponding rate. But Moore himself conceded that this was not a scientific law, merely a transitory statistical observation. In 2010, he acknowledged that as transistors approach the size of atoms, computer processing power will reach a hard physical limit in the coming decades. After that, more cores can be added to processors to increase speed, but only at the cost of larger, more expensive and more power-hungry devices. To avoid a bottleneck, therefore, we need to find new ways of monitoring and responding to data.
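Moore's observation reduces to a simple exponential formula, which shows why it held up for so long. A minimal sketch (the 4004 transistor count is a widely reported historical figure; the projection is illustrative, not a claim about any specific modern chip):

```python
# Moore's observation as a growth formula:
# transistor count after n years ~ initial_count * 2 ** (n / doubling_period)

def transistor_count(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Example: Intel's 4004 (1971) had roughly 2,300 transistors.
# Fifty years of doubling every two years gives 25 doublings:
projected = transistor_count(2_300, 50)
print(f"{projected:,.0f}")  # ~77 billion
```

Twenty-five doublings take 2,300 transistors to roughly 77 billion, the same order of magnitude as the largest chips actually shipping five decades later, which is why the "law" was treated as a planning roadmap rather than a curiosity.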
The second factor to consider is cybersecurity. In an increasingly interconnected world, millions of new devices are coming on stream. The data they provide will potentially influence things like how the electricity grid is controlled, how healthcare is administered, and how traffic is managed. As a result, edge security — the security of data that resides outside of the network core — becomes paramount. This provides a complex challenge for cybersecurity experts, as the many different combinations of devices and protocols provide new attack surfaces and opportunities for man-in-the-middle intrusions.
Learning from networks in nature
So if centralized processing is too slow and insecure for the data-abundant economy to come, what is the alternative? Some experts are looking for inspiration in the natural world, arguing that we should move from a top-down to a bottom-up model of monitoring and responding to data. Take ant colonies, for example. While each individual ant has relatively modest intelligence, collectively ant colonies manage to create and maintain complex, dynamic networks of foraging trails that can connect multiple nests with transient food sources. They do this by following a few simple behaviors and responding to stimuli in their local environment, such as the pheromone trails of other ants. Over time, evolution has unearthed individual-level instincts and behaviors that produce a system which is highly effective and robust at the macro level. If a trail is destroyed by wind or rain, the ants will find a new route, without any individual ant even being aware of the overall objective to maintain the network.
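The trail-forming mechanism described above can be sketched in a few lines of code. This is a toy model with made-up parameters, not a biological simulation: ants pick between two routes with probability proportional to pheromone strength, trails evaporate unless reinforced, and shorter round trips deposit pheromone faster:

```python
import random

# Toy model of stigmergy: ants choosing between two routes to a food source.
# All parameters are illustrative. No individual ant knows which route is shorter.

random.seed(42)
pheromone = {"short": 1.0, "long": 1.0}   # initial trail strength, deliberately equal
LENGTH = {"short": 1, "long": 2}          # route lengths in arbitrary units
EVAPORATION = 0.1                         # fraction of pheromone lost per step

for step in range(200):
    # Each ant picks a route with probability proportional to pheromone strength.
    total = pheromone["short"] + pheromone["long"]
    route = "short" if random.random() < pheromone["short"] / total else "long"

    # Evaporation: trails fade unless reinforced. This is what lets the
    # colony abandon a destroyed or inferior route without central control.
    for r in pheromone:
        pheromone[r] *= 1 - EVAPORATION

    # The returning ant deposits pheromone; shorter routes are reinforced
    # more per unit time because round trips complete faster.
    pheromone[route] += 1.0 / LENGTH[route]

print(pheromone)
```

Run it and the short route accumulates far more pheromone than the long one: a global optimization emerges from purely local rules plus positive feedback, with no ant ever comparing the two routes.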
So what if this same logic could be applied to organizing computer networks? Similarly to ant colonies, in a blockchain network many nodes of modest processing power can combine to produce a global outcome greater than the sum of its parts. Just as instincts and behavior are crucial in nature, the rules governing how nodes interact will be crucial in determining how successful a network will be at achieving macro-level goals.
Decentralization via Geeq: from the bottom-up
Geeq’s technology strips away the complexity of running a node in a blockchain network: at Geeq, a node owner need not buy special equipment or keep close track of the changes in the rest of the blockchain space. Geeq’s blockchains are designed for ordinary individuals. Most importantly, Geeq aligns incentives for individuals to contribute to a blockchain network in simple and affordable ways that fit into their daily lives.
By keeping costs low, making tasks small and efficient, and organizing the efforts of nodes so they leave clear trails for others to follow, small Geeq networks have a natural path to evolve into complex, dynamic, and sustainable systems. Without these adaptations, earlier generations of decentralized networks have needed vigilance and fairly frequent intervention to survive.
As researchers predicted over the last decade, Geeq has succeeded in harnessing new information and communications technology to create networks that behave like living organisms: networks that automatically detect and respond to common challenges in decentralized ways. It has taken a dedication to learning from fields outside the traditional roots of blockchain to identify the initial conditions and components needed for optimal growth.
As nature evolves, so should technology. Geeq’s protocols introduce an effective next generation set of blockchain technologies. These enable individual nodes to handle enormous quantities of transactions and data messages, processed securely for later use. Because Geeq itself does not gather data, people are able to use these new kinds of blockchain services for their own benefit, as well as in their collective interest.
Now that every smart device in the physical world generates data, and every movement you make online continues to be tracked, collected, and monetized by intermediaries, it is more important than ever for you to have realistic alternatives to pushing your data to central databases.
Decentralized networks are uniquely suited to the challenges ahead. While a lot of research, testing and experimentation remains to be done, the fundamental robustness and utility of Geeq’s underlying technology has been demonstrated.
We believe the only way to ensure that people will be the ones to derive the benefits from the data they generate is to build decentralized systems that serve their individual needs better than the ones that exist. The time to do it is now.
A shortened version of this article appeared in CoinTelegraph, on March 14, 2021.