Amid the chaos of the coronavirus pandemic, semiconductor firm Nvidia Corp announced a new chip that can be split up digitally to run more than one program on a single physical chip. It is the first product of its kind from the company and is set to rival similar technology from Intel Corp, which developed this key capability earlier.
The logic behind the chip, called the A100, is simple, according to the Santa Clara, California-based company: help data center owners get every bit of computing power possible out of the physical chips they purchase by ensuring the chip never sits idle. The same principle has helped drive the growth of cloud computing over the past two decades and has allowed Intel to build a huge data center chip business.
Software developers looking for additional computing power do not rent a full physical server inside the data centers of cloud computing providers such as Amazon.com or Microsoft Corp. Rather, they rent a software-based slice of a physical server called a “virtual machine.”
Virtualization technology grew out of software developers’ realization that powerful, pricey servers often ran well below their full computing capacity. The solution was to cram more software onto the physical machines by slicing them into smaller virtual ones, somewhat like fitting pieces in the puzzle game Tetris.
Wringing every bit of computing power from hardware and selling that power to millions of customers has been the secret behind the hugely profitable cloud computing businesses of the likes of Amazon and Microsoft.
Until now, however, this technology was mostly limited to processor chips from Intel and similar chips such as those from Advanced Micro Devices Inc.
The new A100 chip can be split into seven “instances,” Nvidia said in its announcement on Thursday.
The development also solves a practical problem for Nvidia. The market for artificial intelligence chips, which Nvidia makes and sells, breaks into two parts. The “training” part of AI requires a powerful chip for tasks such as analyzing millions of images to teach an algorithm to recognize faces. But once training is complete, “inference” tasks such as scanning a single image to spot a face need only a fraction of that computing power.
Nvidia is hoping the A100 can replace both: the chip can be used as one big chip for training and split into smaller chips for “inference” tasks.
“Because it’s fungible, you don’t have to buy all these different types of servers. Utilization will be higher,” said Nvidia Chief Executive Jensen Huang. “You’ve got 75 times the performance of a $5,000 server, and you don’t have to buy all the cables.”
(Adapted from Reuters.com)