Advice from Bob Qin: Fault-tolerant distributed computing can re-aggregate task assignments, accomplishing a huge computing problem through a selection of modes as well as the traditional mode, and can distribute storage across CPUs and GPUs. That is distributed computing in the field of storage alone; many innovations still surround it.
On August 21, 2019, the POW'ER 2019 Global Developers Conference, hosted by Mars Finance, was held in Beijing, China. The conference invited 70 tech leaders from the fields of blockchain, 5G, AI, cloud computing, big data, and IoT, along with experts, scholars, and heads of investment and research institutions, who shared their judgments and outlooks on new technology trends and business opportunities. At the meeting, BitCherry Chief Scientist and Chairman of the North American Blockchain Foundation Bob Qin presented a keynote speech on "Decentralized Distributed Computing: Distributed E-commerce Network Public Chain Innovation."
Bob Qin also remarked: "Decentralization and centralization are concepts in a political or commercial sense. From an engineering perspective, it is a two-way gradual process. In distributed computing, going from top to bottom makes no difference from going from bottom to top, so this is not something to worry about."
With fault-tolerant distributed computing, a problem on a single machine is less likely to affect the whole system; yet with multiple machines, there will always be a problem on some machine. The solution is to re-aggregate task assignments through various modes, as well as traditional modes, to accomplish a huge computational problem, and to use distributed computing to distribute the storage of your CPUs and GPUs. Such distributed computing is in the field of storage alone; many innovations still surround it.
The following is the speech:
I would like to thank Mars Finance for making it possible for me to stand here and share my thoughts on blockchain with all of you. From the birth of Bitcoin until today, many people have talked about its applications: the ICO market in 2017, supply chains in 2018, and actual implementation in 2019. Fundamentally, many people talk about how to implement a project, but from the perspective of a software engineer, the whole operation of Bitcoin and Ethereum is, scientifically speaking, computing. Lots of people talk about how much money can be earned through implementation; what Alibaba supports is, for us, an extension of distributed computing. Again, from a software engineer's viewpoint, the market, finance, and transactions are all well and good, but whatever the process is, it is transforming into a distributed supercomputer.
What is a distributed system? Computing that used to be executed on one machine becomes one or more computing tasks on multiple machines. This applies to all the platforms of today, be it cloud computing, the distributed commerce I mentioned earlier, distributed file systems, or computation spread across computers. In distributed computing, what is the difference between traditional centralized distributed computing and decentralized distributed computing? What kind of logic do our platform and each project follow, from a software engineer's perspective?
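The core move described here, one machine's computation becoming several machines' tasks, can be sketched in a few lines. This is only an illustration using threads on one host standing in for separate machines; the chunking scheme and worker count are arbitrary choices, not anything from the speech:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "machine" computes only its share of the overall task.
    return sum(chunk)

data = list(range(1_000_000))

# Split one computation into four tasks: the basic step that turns
# a single-machine job into a distributed one.
chunks = [data[i::4] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same answer as sum(data) computed on one machine
```

The result is identical to the single-machine computation; what changes is where the work runs, which is exactly the property a distributed system has to preserve.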
We talk about technology and its ultimate pursuit, but the fact is that we are already in a relatively closed application scenario in the current era of cognition and technology. When people cannot break through, when our technology and our application scenarios reach the edge, we innovate at the edge. To me, from the perspective of distributed computing, when computers are used for a computing application and their number increases, we encounter bottlenecks.
The first problem we encounter: when one machine becomes 1,000 machines, or even 10,000 machines, or the hundreds of thousands of machines in a blockchain, what computing problems arise when the computing is spread all over the world? It is fundamentally a communication problem: those 10,000 computers must communicate and exchange information with one another. Data must be synchronized between computer 1 and computer 10,000; otherwise, it is impossible to complete the complicated computation. This is the challenge we encountered, and through it our computers have evolved over the years: two computers communicate through RPC, and computing is done in client/server (c/s) mode. Instead of adding CPUs and GPUs to improve computing power, we need a computational model that scales horizontally.
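The RPC-over-client/server pattern mentioned above can be shown with Python's standard-library XML-RPC: one process exposes a procedure, another calls it over the network as if it were local. The `add` function, the loopback address, and port 8000 are all arbitrary choices for this sketch, not anything specific to the projects discussed:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure that remote clients may invoke.
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(lambda a, b: a + b, "add")

# Run the server in a background thread so the client below can reach it.
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote procedure call reads like a local call.
client = ServerProxy("http://127.0.0.1:8000")
result = client.add(2, 3)
print(result)

server.shutdown()
```

Adding more such servers, rather than more CPUs and GPUs to one box, is the horizontal scaling the speech contrasts with vertical scaling.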
Many people talk about decentralization and centralization. These are political or commercial concepts. From the engineering point of view, it is a two-way gradual process. In distributed computing, going from top to bottom makes no difference from going from bottom to top, so this is not something to worry about.
The second problem is fault tolerance. With fault-tolerant distributed computing, a problem on one machine is less likely to affect the whole; yet with multiple machines, there will always be a problem on some machine. Synchronizing the ledger and its data requires computing power, and a distributed fault-tolerance protocol solves the fault-tolerance problem: we need to know how to handle errors on other computers. We also need to re-aggregate task assignments, completing a huge computational problem through various modes as well as traditional modes. Distributing the storage of your CPUs and GPUs through distributed computing is distributed computing in the field of storage alone; many innovations still surround it.
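The "re-aggregate task assignments" idea above can be sketched as a retry loop: tasks whose machines fail are collected and reassigned until the whole computation completes. The failure rate, the squaring task, and the `run_on_machine` helper are all hypothetical stand-ins for real remote workers:

```python
import random

def run_on_machine(task):
    # Hypothetical worker: in a real cluster this would be a remote
    # call; here a "machine" fails at random to motivate fault tolerance.
    if random.random() < 0.3:
        raise ConnectionError("machine down")
    return task * task

def fault_tolerant_map(tasks, attempts=20):
    """Re-aggregate failed tasks and reassign them each round,
    the basic move behind distributed fault tolerance."""
    results = {}
    pending = list(tasks)
    for _ in range(attempts):
        failed = []
        for t in pending:
            try:
                results[t] = run_on_machine(t)
            except ConnectionError:
                failed.append(t)  # gather failures for reassignment
        pending = failed          # re-aggregate: only failures rerun
        if not pending:
            break
    return [results[t] for t in tasks]

print(fault_tolerant_map([1, 2, 3, 4]))
```

Real protocols (consensus, replication) are far more involved, but the shape is the same: detect the faulty machine, regroup its work, and hand it to a healthy one.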
Let's take Bitcoin as an example. Bitcoin connects all of its nodes: the mining pools, mining machines, and wallets. When the digital asset is money, a transaction between Alice and Bob cannot be tampered with. Bitcoin's computing power has already outweighed that of supercomputers, comparable to the Galaxy computer housed at the National University of Defense Technology; every blockchain project is building a supercomputer. This is the model we designed, the architecture of the calculation engine, ensuring these parts cannot be separated when doing e-commerce: business logic is ultimately business logic, and the same goes for data and services. Combined, they create big data and IPFS, and everything together constitutes a distributed supercomputer.
Many people start from a single node, build the main network, the strategy, and everything else, ultimately hoping that future applications, be it Alibaba, Tencent, ICBC, or Citibank, will see their blockchains eventually become supercomputers. Enhancing their computing power solves a computational problem, and that computational problem is a distributed business application scenario for everyone. Today I wanted to show you real distributed computing: all distributed computing uses this method, with every project building its own distributed supercomputer, as I mentioned earlier. Ultimately, the future supercomputer of distributed commerce will have everyone participating in computing and mining.