How does ZKRush DevOps Empower Aleo Mining


ZKRush, a team with extensive experience in building and operating mining pools, has developed a purpose-built cluster management and operations system to boost mining efficiency, control costs, and reduce failure rates. The upcoming Phase 2 of Aleo Testnet3 is expected to draw a large influx of former Ethereum miners, making competition exceptionally fierce and putting the DevOps crew behind the scenes to the test. From a DevOps viewpoint, this blog demonstrates how our team applies a range of solutions to the practical difficulties of Aleo mining, based on our engagement with and understanding of Aleo.

Containerization

Using containerization technology to deploy and manage services has clear advantages over traditional deployment methods. Imagine how difficult unified management would be with tens of thousands of miners of varying types and configurations; containerized deployment smooths out those differences. In this situation, Kubernetes is more than capable of handling mining workloads. On top of an integrated container cluster management platform, we have also done secondary development, adding extra automation, CMDB configuration management features, and Aleo-specific configuration optimizations better suited to mining.
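
As a minimal sketch of what containerized deployment looks like in practice, the snippet below uses the official Kubernetes Python client to create a Deployment running a prover. The image name, namespace, replica count, and resource figures are illustrative assumptions, not ZKRush's actual configuration.

```python
# Sketch: deploy a prover as a Kubernetes Deployment via the Python client.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

container = client.V1Container(
    name="aleo-prover",
    image="registry.example.com/aleo/prover:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "16", "memory": "32Gi"},
        limits={"cpu": "16", "memory": "32Gi"},
    ),
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "aleo-prover"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="aleo-prover"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "aleo-prover"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="mining", body=deployment)
```

Once miners are declared this way, rolling out a new prover version or changing resource limits becomes an update to one spec rather than a change applied machine by machine.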

Network

A competent operations specialist may find it easy to manage a Kubernetes cluster. However, when our miners are spread across multiple regions and IDCs, we need more sophisticated network modules to integrate miners across territories. Piecing together a cluster from servers in several locations is not straightforward. We use SDN, together with an SDN orchestration and management platform on top of the cluster, to connect servers across IDCs and build virtual networks according to business requirements. In this way, the Kubernetes cluster can schedule workloads across servers in different IDCs and deploy programs to them.
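
To illustrate the end result of this architecture, a single cluster spanning several data centers, here is a small sketch that groups the cluster's nodes by an IDC label. The label key "idc" is an assumption for illustration; the actual label schema would be whatever the SDN and cluster setup assigns when a node joins.

```python
# Sketch: list the nodes of one Kubernetes cluster, grouped by data center.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
nodes_by_idc = defaultdict(list)
for node in client.CoreV1Api().list_node().items:
    idc = node.metadata.labels.get("idc", "unknown")  # hypothetical label key
    nodes_by_idc[idc].append(node.metadata.name)

for idc, names in sorted(nodes_by_idc.items()):
    print(f"{idc}: {len(names)} miners")
```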

CMDB

CMDB functionality is one of the most essential aspects of our secondary development of the container management platform. In practice, the miners in our cluster are not all running the same program with the same configuration. Since the Merge, it is quite likely that some of them will switch to BTC, while the rest will look for alternatives such as Aleo, Iron Fish, and so on. The team must distinguish hardware intended for Aleo from hardware intended for ETH or other projects based on their real differences, and on that basis allocate computational resources to different miners and clients, which increases the complexity of operation and maintenance. All of this can be handled easily in the CMDB: operations and maintenance personnel only need to enter each miner's basic information, intended use, and so on, and then combine the container management platform with the automation features from our secondary development to roll out programs quickly.
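
The sketch below shows one plausible shape for such a CMDB record and how its fields could be projected onto Kubernetes node labels so the scheduler can use them. The field names, label keys, and hostname are hypothetical, not ZKRush's actual schema.

```python
# Sketch: a CMDB record synced onto a Kubernetes node as labels.
from dataclasses import dataclass
from kubernetes import client, config

@dataclass
class MinerRecord:
    hostname: str
    idc: str          # data center the machine sits in
    cpu_cores: int
    gpu_model: str    # e.g. "rtx3090"
    project: str      # "aleo", "btc", "ironfish", ...

def sync_to_cluster(record: MinerRecord) -> None:
    """Copy the CMDB fields onto the corresponding node as labels."""
    config.load_kube_config()
    labels = {"idc": record.idc, "gpu": record.gpu_model, "project": record.project}
    client.CoreV1Api().patch_node(record.hostname, {"metadata": {"labels": labels}})

sync_to_cluster(MinerRecord("miner-hk-017", "hk-01", 64, "rtx3090", "aleo"))
```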

We’ve compiled responses to some frequently asked questions, along with some real-world scenarios, below.

How to Manage Different Server Configurations?

Once miners have joined our clusters, we have a vast pool of resources. We then assign these miners two kinds of labels, one based on hardware configuration and one based on project. During program deployment you can choose to launch the matching service on servers carrying a given label, and Kubernetes automatically schedules across those servers. Users only have to work out the target output and the resources it requires. On this basis, the number of CPU cores and GPUs that may be used can also be scheduled, and if output needs to be strictly controlled, everything can be configured before the application starts.
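
A minimal sketch of that idea follows: a pod spec that only lands on nodes carrying certain labels and that caps CPU and GPU usage explicitly. It assumes nodes were labeled "project=aleo" and "gpu=rtx3090" as in the CMDB sketch above, and that the NVIDIA device plugin exposes GPUs as the "nvidia.com/gpu" resource.

```python
# Sketch: label-based placement plus explicit CPU/GPU limits.
from kubernetes import client

pod_spec = client.V1PodSpec(
    node_selector={"project": "aleo", "gpu": "rtx3090"},  # hypothetical labels
    containers=[
        client.V1Container(
            name="aleo-prover",
            image="registry.example.com/aleo/prover:latest",  # hypothetical image
            resources=client.V1ResourceRequirements(
                limits={"cpu": "32", "nvidia.com/gpu": "1"},
            ),
        )
    ],
)
# This spec would sit inside the pod template of a Deployment like the one
# shown in the Containerization section.
```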

How to Add Monitoring to the Servers?

With thousands of miners, ensuring that the servers are running smoothly is critical to eventual income. The container management platform includes a monitoring system that, paired with Prometheus and Grafana, effectively covers the fundamental health of the servers at the system and hardware levels. On top of system monitoring, you can also add business-specific metrics by using controllers such as Kubernetes’ DaemonSet/Job. For example, in Aleo Testnet 2 we tracked the number of proofs submitted by Aleo miners as well as the time spent on each round of tasks. In other words, we gathered some on-chain data from Aleo, evaluated it, and sent alerts as appropriate through a third-party platform, allowing us to perceive business developments more promptly.
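
As an illustration of such business-level metrics, here is a small sketch of a Prometheus exporter using the prometheus_client library. The metric names and the placeholder proving loop are assumptions for demonstration; a real exporter would wrap the actual prover process.

```python
# Sketch: export proof count and per-round duration for Prometheus to scrape.
import time

from prometheus_client import Counter, Histogram, start_http_server

PROOFS_SUBMITTED = Counter(
    "aleo_proofs_submitted_total", "Proofs submitted by this miner"
)
ROUND_DURATION = Histogram(
    "aleo_round_duration_seconds", "Time spent on each round of prover tasks"
)

def run_proving_round() -> None:
    """Placeholder for one round of prover work."""
    time.sleep(1)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://<pod>:9100/metrics
    while True:
        with ROUND_DURATION.time():  # records how long the round took
            run_proving_round()
        PROOFS_SUBMITTED.inc()
```

Grafana dashboards and alert rules can then be built on these series alongside the standard node-level metrics.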

What are Our Advantages and Solutions in a Multi-IDC Scenario across Geography?

This part relies on the SDN solution mentioned earlier. Based on that network architecture and combined with Kubernetes clusters, we bring all regional devices into a single cluster for unified management, which greatly reduces the operation and maintenance workload. Because configuring SDN is extremely complex, we have introduced an adapted network management platform through which a server can be added to a custom network with a single command; this step is also covered directly in the server initialization phase, so no manual operation is required. In addition, to avoid extra network overhead, we deploy a container image repository on each regional intranet so that servers in each IDC can download images locally. We implemented this as part of the secondary development of the container management platform: it can be configured to select among multiple repository addresses automatically and to enable replication between repositories. More importantly, once deployed it will find the optimal repository essentially without any management involvement.
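
For intuition, here is a hypothetical sketch of choosing the closest registry mirror by round-trip time to the standard registry /v2/ endpoint. The mirror URLs are placeholders, and in practice this selection is handled by the platform's replication and address-selection logic rather than a standalone script.

```python
# Sketch: pick the lowest-latency container registry mirror.
import time
import urllib.error
import urllib.request
from typing import Optional

MIRRORS = [
    "https://registry.idc-hk.example.com",
    "https://registry.idc-eu.example.com",
    "https://registry.idc-us.example.com",
]

def ping(url: str) -> Optional[float]:
    """Round-trip time to the registry's /v2/ endpoint, or None if unreachable."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(f"{url}/v2/", timeout=2)
    except urllib.error.HTTPError:
        pass  # a 401/404 response still means the registry is reachable
    except OSError:
        return None
    return time.monotonic() - start

def fastest_mirror(mirrors: list) -> str:
    timings = {m: ping(m) for m in mirrors}
    reachable = {m: t for m, t in timings.items() if t is not None}
    return min(reachable, key=reachable.get)

print(fastest_mirror(MIRRORS))
```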

A mature and experienced professional team can ensure mining efficiency and pool stability to a significant extent, which is an important factor for miners when choosing between platforms. The sections above are excerpts from our team’s DevOps approach at ZKRush. We also believe that by sharing our technical knowledge, we give everyone a more comprehensive grasp of ZKRush.
