By: Geeq on Feb 21, 2020
Hello Geeqs, it’s been a while since our last update, we know, we have missed you too. We have been hard at work, with barely enough time for the Christmas break and then Valentine’s … sweet poems and love letters included, and that was just from the team to one another!
Without further ado, let's jump right back in for a quick recap, starting in what feels like the last decade: October 2019.
We announced in our last development update that we had completed the first stage of the foundation build and implemented our first functional testnet, which involved:
The Basic Node Client – a major part of the foundation build allowing for transactions to be sent and received via blockchain.
The Docker TestBed – where multiple instances of the node client are deployed in containers that run as independent validating nodes in the Geeqchain network.
The Logging Server – part of the blockchain that receives reports from each of the nodes allowing the team to collect performance statistics.
Since then, we have been running comparisons and stress tests on the testbed itself. Here is what we have found and completed so far:
1. Deployed multiple instances of the current version of the node client in containers to create a validation network for a single chain.
2. Experimented with different resource allocations, ranging from 512MB of RAM and 1/4 of one CPU thread of a Xeon processor to 2GB and a full thread.
3. Ran a series of tests in which a random mix of valid and invalid transactions was submitted to the individual node clients.
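To give a flavor of step 3, here is a minimal sketch of that kind of stress harness. All names here are illustrative (the actual test code is internal), and the toy node simply checks a signature flag rather than doing real validation:

```python
import random

class ToyNode:
    """Stand-in for a node client: accepts a transaction only if it validates."""
    def submit(self, tx):
        return tx["signature"] == "ok"

def make_transaction(valid):
    """Build a toy transaction; `valid` controls whether the signature checks out."""
    return {"amount": random.randint(1, 100), "signature": "ok" if valid else "bad"}

def submit_mixed_load(nodes, count, valid_ratio=0.8):
    """Submit a random mix of valid and invalid transactions to random nodes."""
    results = []
    for _ in range(count):
        tx = make_transaction(random.random() < valid_ratio)
        node = random.choice(nodes)       # pick a node client at random
        results.append(node.submit(tx))   # node should accept only valid txs
    return results
```

Counting how many submissions each node accepts and rejects, and at what resource cost, is the kind of statistic the logging server collects.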
So far, we have seen some intriguing results:
We found that when we fed the network 400 TPS, CPU usage was between 2% and 4% of capacity, with RAM usage between 4% and 7% on average. Usage peaked when a node took the role of hub for a block and dropped significantly when the node was simply part of the network.
We pushed this to 1000 TPS and found resource usage went up but stayed around 10%. We decided not to push this further because, at 1000 TPS, the hub was reaching the limits of what home broadband could sustain. If we put nodes on cloud services and did not limit connection speed, we could get higher TPS.
At 1000 TPS, a single chain processes over 31 billion transactions per year.
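The yearly figure follows directly from the per-second rate:

```python
tps = 1000
seconds_per_year = 60 * 60 * 24 * 365   # 31,536,000 seconds in a (non-leap) year
tx_per_year = tps * seconds_per_year    # 31,536,000,000, i.e. over 31 billion
```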
The choice of protocol buffers as a messaging/data format, together with the hub-and-spoke network architecture, makes relatively high throughput possible.
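The hub-and-spoke pattern can be sketched as follows. This is a simplified illustration with hypothetical names, not Geeq's actual code: for a given block, one node acts as the hub, gathering pending transactions from the other nodes (the spokes) and broadcasting the assembled block back out, which is why resource usage peaks at the hub:

```python
class Spoke:
    """A validating node in its non-hub role."""
    def __init__(self, name):
        self.name = name
        self.pending = []   # transactions waiting to be collected by the hub
        self.ledger = []    # blocks received back from the hub

    def receive_block(self, block):
        self.ledger.append(block)

class Hub:
    """For a given block, one node acts as hub: collect, assemble, broadcast."""
    def __init__(self, spokes):
        self.spokes = spokes

    def build_block(self):
        txs = []
        for s in self.spokes:       # gather pending txs from each spoke
            txs.extend(s.pending)
            s.pending = []
        block = {"txs": txs}
        for s in self.spokes:       # broadcast the assembled block to all spokes
            s.receive_block(block)
        return block
```

In this shape, every transaction passes through the hub twice (once in, once out), which is why the hub's bandwidth, not its CPU, is the first resource to saturate.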
We doubt many applications will have this level of demand, and if they do, we can simply launch new instances to share the load. This is the beauty and power of Geeq 🙂
Through all the tests and results we have shared above, the one issue we expected to find all along did come up: a bottleneck caused by the bandwidth allocated to nodes, which limits their ability to communicate. This physical constraint is a significant obstacle to scaling for other platforms as well.
Geeq’s multi-chain platform solves this scalability issue – without sacrificing security. The foundation build indicates that Geeq’s design for a limitlessly scalable, secure ecosystem can function within the bandwidth available to a typical home broadband connection, which was one of the key objectives for this build.
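One simple way to picture spreading load across parallel chain instances is deterministic assignment of each transaction to a chain. Hash-based routing is our illustrative choice here, not necessarily Geeq's actual routing rule:

```python
import hashlib

def assign_chain(tx_id, num_chains):
    """Deterministically map a transaction to one of several parallel chains."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_chains

# Each chain carries roughly 1/num_chains of the total load, so aggregate
# throughput grows simply by launching more instances.
```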
We hope you enjoy reading about these internal milestones. While there is more to do, we have accomplished what we expected to in the foundation build. We hope these updates will give you insights into how Geeq is approaching its builds in a deliberate and systematic way. These are some very promising results and we appreciate your support!
Would you be interested in working as a code tester with us later this spring? Please let us know!