Graviton3E marks Amazon's debut in high-performance computing
Aims to compete with established players like Intel
AWS is releasing new chips designed to enable high-performance computing (HPC) tasks like gene sequencing and weather forecasting.
According to Amazon's cloud computing division, users will soon be able to rent processing power on Graviton3E chips, the newest iteration of AWS' Graviton line.
In an interview with Bloomberg, Peter DeSantis, a senior vice president who oversees most of AWS's engineering teams, said the new chip is a springboard for making HPC more accessible.
DeSantis said Graviton3E will deliver twice the performance of current Graviton chips for one type of computation needed by high-performance computers, and will be 20% better than its predecessor when used in conjunction with other AWS technologies.
During the keynote at this year's re:Invent conference, DeSantis noted that Graviton3E works best for financial modelling and the life sciences.
The Graviton3E chip is optimised for workloads that rely heavily on vector and floating-point operations, which are typical in HPC, notably in research involving weather forecasting, biological sciences, chemistry, materials science and finance.
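To make the idea of a vector, floating-point workload concrete, the sketch below shows an "axpy" update (y = a*x + y), a building block that recurs in simulation and modelling codes of the kind mentioned above. It is a generic illustration, not AWS code: NumPy applies the operation across whole arrays, which lets the runtime use the CPU's vector (SIMD) units; the array size and constant are illustrative only.

```python
# Minimal sketch of a vectorisable floating-point kernel (y = a*x + y).
# NumPy evaluates it over whole arrays, letting the CPU's vector units do the work.
import numpy as np

def axpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return a*x + y computed element-wise over full arrays."""
    return a * x + y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000_000                      # large arrays are typical of HPC workloads
    x = rng.standard_normal(n)          # 64-bit floating-point data
    y = rng.standard_normal(n)
    y = axpy(0.5, x, y)                 # one vectorised pass over ten million elements
    print(y[:5])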
Graviton3E chips will support new EC2 instances, such as the upcoming HPC7G instances for HPC workloads, which will offer 200Gbps of dedicated network bandwidth. Beyond that, Amazon has not provided any information on the availability of services based on the new chip.
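Since AWS has not yet published those details, the following is only a hypothetical sketch of how such an instance might be requested through boto3 once it becomes available; the instance type name, AMI ID and placement-group name are placeholders, not confirmed values.

```python
# Hypothetical sketch: requesting an HPC7G instance via boto3 once it launches.
# The instance type name, AMI ID and placement group below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups keep HPC nodes close together for low-latency networking.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",     # placeholder AMI
    InstanceType="hpc7g.16xlarge",       # assumed name, not yet announced by AWS
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-demo"},
)
print(response["Instances"][0]["InstanceId"])
```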
Graviton3E is Amazon's most recent push to produce more of the hardware that equips AWS' large data centres. Running workloads on Amazon's in-house processors is one way the company can reduce expenses for its customers.
Amazon purchased Annapurna Labs in 2015 as part of its efforts to create its own ARM-based semiconductor designs. Since then, the company has been expanding its chip capabilities.
AWS CEO Adam Selipsky introduced the second iteration of the Inferentia chip on Tuesday. The chip applies trained machine-learning models to enormous arrays of data to generate predictions, a process known as 'inference'.
This puts Amazon in direct competition with GPUs made by Nvidia, presently considered to be the industry standard for AI processing.
Inferentia2 is designed to handle larger volumes of data than its predecessor, making it possible to do things like recognising and interpreting human speech, as well as generating images with software.
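For readers unfamiliar with the term, the sketch below is a generic illustration of inference and has nothing to do with Inferentia's actual programming model: parameters learned during an earlier training run are fixed, and the model is simply applied to new data to produce predictions. The weights and inputs are made up for the example.

```python
# Generic illustration of inference: applying an already-trained model to new data.
# The weights stand in for parameters learned earlier; at inference time they are
# fixed, and the model only computes predictions.
import numpy as np

trained_weights = np.array([0.8, -1.2, 0.3])   # produced by a prior training run
trained_bias = 0.05

def predict(batch: np.ndarray) -> np.ndarray:
    """Run inference on a batch of feature vectors with fixed parameters."""
    return batch @ trained_weights + trained_bias

new_data = np.array([[1.0, 0.5, 2.0],
                     [0.0, 1.5, 1.0]])
print(predict(new_data))                        # predictions; no learning happens here
```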
Intel has maintained a commanding lead in the server chips market throughout most of the industry's history, but AMD has amassed a sizeable share of the sector in recent years. At the same time, Nvidia GPUs are the go-to choice for many companies when it comes to running AI systems and completing other complex jobs.
Ampere Computing, a server-chip developer whose silicon has been adopted by Microsoft Azure and Google Cloud, is adding to the competition.
AWS sees a chance to profit as well, betting that lower prices will entice more companies and researchers to use AWS for their HPC requirements.
"The reason that high-performance computing isn't big is it's hard," DeSantis told Bloomberg.
"It's hard to get capacity, it's hard to get time on that supercomputer. What we're excited about is bringing the capabilities of high-performance computing to more workloads."