Over the past decade, machine learning, particularly deep learning based on artificial neural networks, has made a number of remarkable advances that have improved our ability to build more accurate systems across a wide range of fields, including computer vision, speech recognition, language translation, and natural language understanding.
Today, with the emergence of GPU-, FPGA- and XPU-based AI computing servers, AI computing power has improved dramatically, and the choice of algorithm framework has become an important factor in optimizing AI computing efficiency. At the same time, as AI computing systems have evolved from a single machine with a single card, to a single machine with multiple cards, and on to multi-machine multi-card parallel computing, data centers must simultaneously manage large fleets of AI servers to support applications. How well those servers are managed and monitored will therefore also affect the output efficiency and operating cost of AI applications.
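The multi-card and multi-machine scaling described above generally rests on data parallelism: each worker computes gradients on its own shard of a batch, and the gradients are averaged (an "all-reduce" in real systems) before a single model update. The following is a minimal single-process sketch of that averaging step for a linear model, in plain NumPy; the function names are illustrative and do not belong to any particular framework.

```python
import numpy as np

def worker_gradient(weights, x_shard, y_shard):
    """Gradient of mean squared error for a linear model on one worker's shard."""
    preds = x_shard @ weights
    return 2.0 * x_shard.T @ (preds - y_shard) / len(y_shard)

def data_parallel_step(weights, x, y, num_workers=4, lr=0.1):
    """Split the batch across workers, average their gradients, update once."""
    x_shards = np.array_split(x, num_workers)
    y_shards = np.array_split(y, num_workers)
    grads = [worker_gradient(weights, xs, ys)
             for xs, ys in zip(x_shards, y_shards)]
    avg_grad = np.mean(grads, axis=0)  # stand-in for the all-reduce step
    return weights - lr * avg_grad
```

With equal-sized shards, the averaged gradient is identical to the full-batch gradient, so adding workers changes where the arithmetic happens, not the result; real multi-machine training adds communication and synchronization on top of this idea.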
Machine learning, particularly deep learning, is driving the development of artificial intelligence (AI). The need for more efficient hardware acceleration of AI/ML/DL is recognized in both academia and industry. This year has seen a growing number of players get in on the act, including some of the world's top semiconductor companies, a wave of startups, and even tech giants such as Google and Microsoft.
As reported by Axios, Google has made significant progress in developing its own processor: its self-developed SoC has recently taped out successfully.
The chip is reportedly co-developed by Google and Samsung and manufactured on a 5 nm process. It features an 8-core CPU cluster in a "2 + 2 + 4" tri-cluster configuration, a GPU based on a new Arm architecture, and Google's Visual Core AI vision processor integrated alongside the ISP and NPU. This allows Google's device chips to better support AI technologies, for example greatly improving the interactive experience of Google Assistant.
Google's SoC is expected to debut in the next-generation Pixel phone and the Google Chromebook.
Google's move is seen as following Apple's processor playbook, shifting from "native system + mainstream flagship chip" to "self-developed chip + native system". Google is certainly not just trying to escape its dependence on Qualcomm chips; more importantly, it hopes that refining its own silicon will achieve a tighter integration of software and hardware, letting the Android system deliver stronger performance advantages on Google's own devices.
Until Google introduced the Tensor Processing Unit (TPU), most machine learning and image-processing algorithms ran on general-purpose chips such as GPUs and FPGAs. Google, which created the open-source deep learning framework TensorFlow, designed the TPU as a special-purpose chip for TensorFlow workloads.
Microsoft hopes to expand the popularity of its Azure cloud platform with a new computer chip designed for the age of artificial intelligence, Ars Technica reports. Going forward, Microsoft will offer Azure users chips made by Graphcore, a British start-up.
Benchmark tests published by Microsoft and Graphcore showed the chip performing as well as Nvidia's and Google's top-tier AI chips, and code written specifically for the Graphcore hardware can be more efficient still. The companies claim that some image-processing tasks run many times faster on the Graphcore chip than on competitors' hardware, and that BERT, a popular AI language-processing model, can be trained as fast as on any existing hardware.
BERT is important for AI applications involving language processing. Google recently said it was using BERT to power its core search business. Microsoft is currently using the Graphcore chip for internal artificial intelligence research projects involving natural language processing.
Unlike most chips used in AI, Graphcore's processor was designed from scratch to support applications such as recognizing faces, understanding speech, parsing language, driving cars and training robots.
Graphcore is expected to attract corporate customers who run business-critical operations on AI, such as self-driving car companies, trading firms, and services that use AI to process large amounts of video and audio. Beyond industry customers, Graphcore also expects developers working on next-generation AI algorithms to explore the platform's advantages.
WIMI Hologram Cloud:
With the mission of "vision is vision", WIMI Hologram Cloud (Nasdaq: WIMI) has established a world-leading, self-developed deep learning platform and supercomputing center and developed a series of AI technologies, including face recognition, image recognition, text recognition, medical image recognition, video analysis, autonomous driving and remote sensing. Its holographic 3D face recognition software is built on WIMI's holographic feature imaging detection and recognition technology, template-matching holographic imaging detection technology, and video processing and recognition technology based on deep learning and training. Traditional 2D facial recognition is based on facial features: it captures information from facial images or video streams and automatically detects and tracks the target face. WIMI's holographic 3D facial recognition combines holographic image capture with 3D portrait recognition technology.
WIMI focuses on the development and application of software technologies and owns AI, machine recognition, machine learning, model theory and video imaging processing technology. Its holographic 3D facial recognition combines structured light with infrared light and can collect more than 30,000 feature points, whereas traditional 2D facial recognition collects fewer than 1,000. Moreover, 3D recognition is less affected by the surrounding environment and is expected to overcome many problems found in traditional 2D facial recognition, such as those caused by lighting, pose, occlusion, facial expression and the demands of dynamic recognition.
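Template matching, which WIMI cites as one building block of its detection pipeline, can be illustrated with the classical normalized cross-correlation approach: slide a template over a grayscale image and score how well each position matches, on a scale from -1 to 1. The sketch below is a generic NumPy illustration of that textbook technique, not WIMI's proprietary implementation.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Score every template position in the image; scores lie in [-1, 1]."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:  # skip flat patches to avoid division by zero
                scores[y, x] = (p * t).sum() / denom
    return scores

def best_match(image, template):
    """Return (row, col) of the highest-scoring template position."""
    scores = normalized_cross_correlation(image, template)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Because the score is normalized by patch and template energy, it is insensitive to uniform brightness changes, which hints at why production systems layer deep learning on top: lighting, pose and occlusion break the rigid-template assumption in ways simple correlation cannot absorb.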
WIMI's digital applications have been extended to all walks of life through digital exhibition halls, including holographic shopping experiences, holographic live broadcasts, holographic press conferences, holographic government-themed exhibition halls, holographic online exhibitions, and holographic IP business exhibitions. A memorial hall's digital exhibition, for example, combines multimedia interactive exhibits and themes with high-tech audio-visual displays to give visitors a more direct impression. A science and technology hall's digital exhibition reproduces past history, people and objects, so that visitors can experience an intriguing journey through science and technology culture as if traveling back in time. An enterprise pavilion's digital exhibition hall integrates corporate culture into digital multimedia exhibits and content displays, shaping an interactive exhibition space with a distinctive corporate personality.
The holographic AR industry is technology-intensive. Holographic AR experience can only be realized through the combination of hardware and software technologies, and technological advances related to holographic AR will bring the holographic AR experience to the next stage. For example, breakthroughs in deep learning AI technology will enable holographic AR devices to integrate content captured by cameras and simulated by computers in a more seamless way, providing users with a more immersive experience. In addition, the development of integrated chips will enable image processors to be produced at a lower cost, thereby reducing the selling price of holographic AR devices. The widespread adoption of 5G networks will enable real-time data transmission between local devices and the Internet, greatly increasing the diversity of content.
5G is a key network infrastructure for the future and an important support for the new generation of the digital economy. The build-out of 5G opens new space for the development of the digital economy, and its current popularity has extended far beyond the information and communication field. The 5G era will meet people's demand for network connections with huge traffic, massive numbers of connected devices, and ultra-high mobility, profoundly changing how people live and work and pushing the mobile broadband experience to new heights.
In recent years, the boom in artificial intelligence has deeply impacted the market for computer chips, as graphics chips with hundreds of simple processing cores can perform parallel numerical computing more efficiently. As 5G develops, these evolving technologies will work together to create new immersive experiences for consumers, solve challenges and create opportunities for enterprises and industries, and ultimately achieve intelligent integration. The world's top hardware, software and Internet technology companies will embrace huge opportunities.
Company Name: OBNewsOnline Inc
Contact Person: Matt Smith
Phone: (+44) 20 8383 1211
Address: 218-993 Harold Street
City: London W2E 3LT
Country: United Kingdom