AWS Inferentia

High performance at the lowest cost in Amazon EC2 for deep learning inference

AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications. 

The first-generation AWS Inferentia accelerator powers Amazon Elastic Compute Cloud (Amazon EC2) Inf1 instances, which deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable Amazon EC2 instances. Many customers, including Airbnb, Snap, Sprinklr, Money Forward, and Amazon Alexa, have adopted Inf1 instances and realized their performance and cost benefits.

The AWS Inferentia2 accelerator delivers a major leap in performance and capabilities over first-generation AWS Inferentia. Inferentia2 delivers up to 4x higher throughput and up to 10x lower latency than Inferentia. Inferentia2-based Amazon EC2 Inf2 instances are designed to deliver high performance at the lowest cost in Amazon EC2 for your DL inference and generative AI applications. They are optimized to deploy increasingly complex models, such as large language models (LLMs) and vision transformers, at scale. Inf2 instances are the first inference-optimized instances in Amazon EC2 to support scale-out distributed inference with ultra-high-speed connectivity between accelerators. You can now efficiently and cost-effectively deploy models with hundreds of billions of parameters across multiple accelerators on Inf2 instances.

AWS Neuron is the SDK that helps developers deploy models on both generations of AWS Inferentia accelerators and run inference applications for natural language processing/understanding, language translation, text summarization, video and image generation, speech recognition, personalization, fraud detection, and more. It integrates natively with popular machine learning (ML) frameworks, such as PyTorch and TensorFlow, so that you can continue to use your existing code and workflows and run them on Inferentia accelerators.
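
To make that concrete, here is a minimal sketch of what the PyTorch integration can look like on an Inf1 instance, assuming the torch-neuron package is installed; the ResNet-50 model and output file name are placeholders, and the exact API should be confirmed against the Neuron documentation for your SDK version.

```python
# Minimal sketch: compiling a PyTorch model for Inf1 with torch-neuron.
# Assumes torch-neuron is installed (e.g., from the Neuron pip repository)
# and that compilation targets an EC2 Inf1 instance.
import torch
import torch_neuron  # registers the Neuron backend with PyTorch
from torchvision import models

# Any traceable PyTorch model works; ResNet-50 is a placeholder here.
model = models.resnet50(pretrained=True)
model.eval()

example_input = torch.rand(1, 3, 224, 224)

# Compile for NeuronCores; operators Neuron cannot compile fall back to CPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])

# The compiled artifact is a TorchScript module and can be saved as such.
model_neuron.save("resnet50_neuron.pt")
```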


Benefits

High performance and throughput

Each first-generation Inferentia accelerator has four first-generation NeuronCores, with up to 16 Inferentia accelerators per EC2 Inf1 instance. Each Inferentia2 accelerator has two second-generation NeuronCores, with up to 12 Inferentia2 accelerators per EC2 Inf2 instance. Inferentia2 offers up to 4x higher throughput and 3x higher compute performance than Inferentia. Each Inferentia2 accelerator supports up to 190 tera floating-point operations per second (TFLOPS) of FP16 performance.

Low latency with high-bandwidth memory

The first-generation Inferentia has 8 GB of DDR4 memory per accelerator and also features a large amount of on-chip memory. Inferentia2 offers 32 GB of HBM per accelerator, increasing the total memory by 4x and memory bandwidth by 10x over Inferentia.

Native support for ML frameworks

The AWS Neuron SDK integrates natively with popular ML frameworks such as PyTorch and TensorFlow. With AWS Neuron, you can use these frameworks to optimally deploy DL models on both generations of AWS Inferentia accelerators with minimal code changes and without being tied to vendor-specific solutions.

Wide range of data types with automatic casting

The first-generation Inferentia supports FP16, BF16, and INT8 data types. Inferentia2 adds support for FP32, TF32, and the new configurable FP8 (cFP8) data type to give developers more flexibility to optimize performance and accuracy. AWS Neuron takes high-precision FP32 models and automatically casts them to lower-precision data types while optimizing accuracy and performance. Autocasting reduces time to market by removing the need for lower-precision retraining.
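
As a hedged sketch of how a developer might steer that casting behavior on Inferentia2, the torch-neuronx trace API accepts compiler arguments; the --auto-cast flags below reflect the neuronx-cc compiler's documented options and should be verified against your installed compiler version, and the model is a placeholder.

```python
# Sketch: controlling Neuron's automatic casting when compiling for Inf2.
# Assumes torch-neuronx and the neuronx-cc compiler are installed on an
# Inf2 instance; verify flag names against your neuronx-cc version.
import torch
import torch.nn as nn
import torch_neuronx

# Placeholder FP32 model; substitute your own.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
example = torch.rand(1, 128)

model_neuron = torch_neuronx.trace(
    model,
    example,
    compiler_args=[
        "--auto-cast", "matmult",    # cast only matrix-multiply operations...
        "--auto-cast-type", "bf16",  # ...down to BF16; other ops stay FP32
    ],
)
```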

State-of-the-art deep learning capabilities


Inferentia2 adds hardware optimizations for dynamic input sizes and custom operators written in C++. It also supports stochastic rounding, a way of rounding probabilistically that enables high performance and higher accuracy compared to legacy rounding modes.
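
For intuition only, the following sketch illustrates the idea behind stochastic rounding in plain PyTorch; it is a conceptual software illustration, not Inferentia2's hardware implementation. Rounding up with probability equal to the fractional part makes rounding errors cancel in expectation instead of accumulating.

```python
# Conceptual illustration of stochastic rounding (not Neuron's hardware path).
import torch

def stochastic_round(x: torch.Tensor) -> torch.Tensor:
    # Round up with probability equal to the fractional part, down otherwise,
    # so the rounded values are correct in expectation.
    floor = torch.floor(x)
    frac = x - floor
    return floor + (torch.rand_like(x) < frac).to(x.dtype)

x = torch.full((1_000_000,), 0.3)
print(stochastic_round(x).mean())  # ~0.3 in expectation
print(torch.round(x).mean())       # 0.0 with round-to-nearest
```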

Built for sustainability


Inf2 instances offer up to 50% better performance per watt than comparable Amazon EC2 instances because they and the underlying Inferentia2 accelerators are purpose-built to run DL models at scale. Inf2 instances help you meet your sustainability goals when deploying ultra-large models.

AWS Neuron SDK

AWS Neuron is the SDK that helps developers deploy models on both generations of AWS Inferentia accelerators and train them on AWS Trainium accelerators. It integrates natively with popular ML frameworks, such as PyTorch and TensorFlow, so you can continue to use your existing workflows and run on Inferentia accelerators with only a few lines of code changed.
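
Continuing the earlier sketch (and assuming the same placeholder file name), running inference with a compiled model is ordinary TorchScript loading plus a forward pass:

```python
# Sketch: loading a previously compiled Neuron model and running inference.
# Assumes "resnet50_neuron.pt" was produced by a torch.neuron.trace call like
# the one sketched earlier, on an Inf1 instance with the Neuron runtime.
import torch
import torch_neuron  # needed so TorchScript can resolve Neuron operators

model_neuron = torch.jit.load("resnet50_neuron.pt")

batch = torch.rand(1, 3, 224, 224)  # placeholder input
with torch.no_grad():
    output = model_neuron(batch)    # executes on the NeuronCores
print(output.shape)
```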


AWS Trainium

AWS Trainium is an AWS-designed DL training accelerator that delivers high-performance, cost-effective DL training on AWS. Amazon EC2 Trn1 instances, powered by AWS Trainium, deliver the highest performance on DL training of popular natural language processing (NLP) models on AWS. Trn1 instances offer up to 50% cost-to-train savings over comparable Amazon EC2 instances.


Customer testimonials

Qualtrics

Qualtrics designs and develops experience management software.

"At Qualtrics, our focus is building technology that closes experience gaps for customers, employees, brands, and products. To achieve that, we are developing complex multi-task, multi-modal deep learning models to launch new features, such as text classification, sequence tagging, discourse analysis, key-phrase extraction, topic extraction, clustering, and end-to-end conversation understanding. As we utilize these more complex models in more applications, the volume of unstructured data grows, and we need more performant inference-optimized solutions that can meet these demands, such as Inf2 instances, to deliver the best experiences to our customers. We are excited about the new Inf2 instances, because it will not only allow us to achieve higher throughputs, while dramatically cutting latency, but also introduces features like distributed inference and enhanced dynamic input shape support, which will help us scale to meet the deployment needs as we push towards larger, more complex large models.”

Aaron Colak, Head of Core Machine Learning, Qualtrics

Finch Computing

Finch Computing is a natural language technology company providing artificial intelligence applications for government, financial services, and data integrator clients.

"To meet our customers’ needs for real-time natural language processing, we develop state-of-the-art deep learning models that scale to large production workloads. We have to provide low-latency transactions and achieve high throughputs to process global data feeds. We already migrated many production workloads to Inf1 instances and achieved an 80% reduction in cost over GPUs. Now, we are developing larger, more complex models that enable deeper, more insightful meaning from written text. A lot of our customers need access to these insights in real-time and the performance on Inf2 instances will help us deliver lower latency and higher throughput over Inf1 instances. With the Inf2 performance improvements and new Inf2 features, such as support for dynamic input sizes, we are improving our cost-efficiency, elevating the real-time customer experience, and helping our customers glean new insights from their data.”

Franz Weckesser, Chief Architect, Finch Computing

Airbnb

Founded in 2008, San Francisco-based Airbnb is a community marketplace with over 4 million Hosts who have welcomed more than 900 million guest arrivals in almost every country across the globe.

"Airbnb’s Community Support Platform enables intelligent, scalable, and exceptional service experiences to our community of millions of guests and hosts around the world. We are constantly looking for ways to improve the performance of our Natural Language Processing models that our support chatbot applications use. With Amazon EC2 Inf1 instances powered by AWS Inferentia , we see a 2x improvement in throughput out of the box, over GPU-based instances for our PyTorch based BERT models. We look forward to leveraging Inf1 instances for other models and use cases in the future.”

Bo Zeng, Engineering Manager, Airbnb

Snap Inc
"We incorporate machine learning (ML) into many aspects of Snapchat, and exploring innovation in this field is a key priority. Once we heard about Inferentia we started collaborating with AWS to adopt Inf1/Inferentia instances to help us with ML deployment, including around performance and cost. We started with our recommendation models, and look forward to adopting more models with the Inf1 instances in the future.”

Nima Khajehnouri, VP Engineering, Snap Inc.

Sprinklr
"Sprinklr's AI-driven unified customer experience management (Unified-CXM) platform enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights – resulting in proactive issue resolution, enhanced product development, improved content marketing, better customer service, and more. Using Amazon EC2 Inf1, we were able to significantly improve the performance of one of our natural language processing (NLP) models and improve the performance of one of our computer vision models. We're looking forward to continuing to use Amazon EC2 Inf1 to better serve our global customers."

Vasant Srinivasan, Senior Vice President of Product Engineering, Sprinklr

Autodesk
"Autodesk is advancing the cognitive technology of our AI-powered virtual assistant, Autodesk Virtual Agent (AVA) by using Inferentia. AVA answers over 100,000 customer questions per month by applying natural language understanding (NLU) and deep learning techniques to extract the context, intent, and meaning behind inquiries. Piloting Inferentia, we are able to obtain a 4.9x higher throughput over G4dn for our NLU models, and look forward to running more workloads on the Inferentia-based Inf1 instances.”

Binghui Ouyang, Sr Data Scientist, Autodesk

Screening Eagle
“The use of Ground Penetrating Radar and the detection of visual defects are typically the domain of expert surveyors. An AWS microservices-based architecture enables us to process videos captured by automated inspection vehicles and inspectors. By migrating our in-house models from traditional GPU-based instances to Inferentia, we were able to reduce costs by 50%. Moreover, we saw performance gains when comparing times with a G4dn GPU instance. Our team is looking forward to running more workloads on the Inferentia-based Inf1 instances.”

Jesús Hormigo, Chief of Cloud and AI Officer, Screening Eagle Technologies

NTT PC

NTTPC Communications is a network service and communications solutions provider in Japan and a telco leader in introducing innovative products in the information and communications technology (ICT) market.

"NTTPC developed “AnyMotion", a motion analysis API platform service based on advanced posture estimation machine-learning models. NTTPC deployed their AnyMotion platform on Amazon EC2 Inf1 instances using Amazon Elastic Container Service (ECS) for a fully managed container orchestration service. By deploying their AnyMotion containers on Amazon EC2 Inf1, NTTPC saw 4.5x higher throughout , a 25% lower inference latency, and 90% lower cost compared to current generation GPU-based EC2 instances. These superior results will help to improve the quality of AnyMotion service at scale."

Toshiki Yanagisawa, Software Engineer, NTT PC Communications Incorporated

Anthem

Anthem is one of the nation's leading health benefits companies, serving the health care needs of 40+ million members across dozens of states. 

"The market of digital health platforms is growing at a remarkable rate. Gathering intelligence on this market is a challenging task due to the vast amounts of customer opinions data and its unstructured nature. Our application automates the generation of actionable insights from customer opinions via deep learning natural language models (Transformers). Our application is computationally intensive and needs to be deployed in a highly performant manner. We seamlessly deployed our deep learning inferencing workload onto Amazon EC2 Inf1 instances powered by the AWS Inferentia processor. The new Inf1 instances provide 2X higher throughput to GPU-based instances and allowed us to streamline our inference workloads.”

Numan Laanait, PhD, Principal AI/Data Scientist, Anthem
Miro Mihaylov, PhD, Principal AI/Data Scientist, Anthem

Condé Nast
"Condé Nast's global portfolio encompasses over 20 leading media brands, including Wired, Vogue, and Vanity Fair. Within a few weeks, our team was able to integrate our recommendation engine with AWS Inferentia chips. This union enables multiple runtime optimizations for state-of-the-art natural language models on SageMaker's Inf1 instances. As a result, we observed a 72% reduction in cost than the previously deployed GPU instances."

Paul Fryzel, Principal Engineer, AI Infrastructure, Condé Nast

Ciao
“Ciao is evolving conventional security cameras into high-performance analysis cameras with capability equivalent to the human eye. Our application advances disaster prevention, monitoring environmental conditions with cloud-based AI camera solutions that raise an alert before a situation becomes a disaster, enabling a response beforehand. Based on object detection, we can also provide insight by estimating the number of incoming guests from videos of brick-and-mortar stores without staff. Ciao Camera commercially adopted AWS Inferentia-based Inf1 instances with 40% better price performance than G4dn with YOLOv4. We look forward to bringing more of our services to Inf1 and leveraging its significant cost efficiency.”

Shinji Matsumoto, Software Engineer, Ciao Inc.

The Asahi Shimbun
“The Asahi Shimbun is one of the most popular daily newspapers in Japan. Media Lab, established as one of our company's departments, has a mission to research the latest technology, especially AI, and to connect cutting-edge technologies with new businesses. With the launch of AWS Inferentia-based Amazon EC2 Inf1 instances in Tokyo, we tested our PyTorch-based text summarization AI application on these instances. This application processes a large amount of text and generates headlines and summary sentences, trained on articles from the last 30 years. Using Inferentia, we lowered costs by an order of magnitude over CPU-based instances. This dramatic reduction in costs will enable us to deploy our most complex models at scale, which we previously believed was not economically feasible.”

Hideaki Tamori, PhD, Senior Administrator, Media Lab, The Asahi Shimbun Company

CS Disco
“CS Disco is reinventing legal technology as a leading provider of AI solutions for e-discovery, developed by lawyers for lawyers. Disco AI accelerates the thankless task of combing through terabytes of data, speeding up review times and improving review accuracy by leveraging complex natural language processing models, which are computationally expensive and cost-prohibitive. Disco has found that AWS Inferentia-based Inf1 instances reduce the cost of inference in Disco AI by at least 35% compared with today's GPU instances. Based on this positive experience with Inf1 instances, CS Disco will explore opportunities for migrating more of its workloads to Inferentia.”

Alan Lockett, Sr. Director of Research, CS Disco

Talroo
“At Talroo, we provide our customers with a data-driven platform that enables them to attract unique job candidates so they can make hires. We are constantly exploring new technologies to ensure we offer the best products and services to our customers. Using Inferentia, we extract insights from a corpus of text data to enhance our AI-powered search-and-match technology. Talroo leverages Amazon EC2 Inf1 instances to create high-throughput natural language understanding models with SageMaker. Talroo’s initial testing shows that Amazon EC2 Inf1 instances deliver 40% lower inference latency and 2x higher throughput compared to G4dn GPU-based instances. Based on these results, Talroo looks forward to using Amazon EC2 Inf1 instances as part of its AWS infrastructure.”

Janet Hu, Software Engineer, Talroo

DMP
"Digital Media Professionals (DMP) visualizes the future with a ZIA™ platform based on AI (Artificial Intelligence). DMP’s efficient computer vision classification technologies are used to build insight on large amount of real-time image data, such as condition observation, crime prevention, and accident prevention. We recognized that our image segmentation models run four times faster on AWS Inferentia based Inf1 instances compared to GPU-based G4 instances. Due to this higher throughput and lower cost, Inferentia enables us to deploy our AI workloads such as applications for car dashcams at scale."

Hiroyuki Umeda, Director & General Manager, Sales & Marketing Group, Digital Media Professionals

Hotpot.ai

Hotpot.ai empowers non-designers to create attractive graphics and helps professional designers to automate rote tasks. 

"Since machine learning is core to our strategy, we were excited to try AWS Inferentia-based Inf1 instances. We found the Inf1 instances easy to integrate into our research and development pipeline. Most importantly, we observed impressive performance gains compared to the G4dn GPU-based instances. With our first model, the Inf1 instances yielded about 45% higher throughput and decreased cost per inference by almost 50%. We intend to work closely with the AWS team to port other models and shift most of our ML inference infrastructure to AWS Inferentia."

Clarence Hu, Founder, Hotpot.ai

SkyWatch
"SkyWatch processes hundreds of trillions of pixels of Earth observation data, captured from space everyday. Adopting the new AWS Inferentia-based Inf1 instances using Amazon SageMaker for real-time cloud detection and image quality scoring was quick and easy. It was all a matter of switching the instance type in our deployment configuration. By switching instance types to Inferentia-based Inf1, we improved performance by 40% and decreased overall costs by 23%. This is a big win. It has enabled us to lower our overall operational costs while continuing to deliver high quality satellite imagery to our customers, with minimal engineering overhead. We are looking forward to transitioning all of our inference endpoints and batch ML processes to use Inf1 instances to further improve our data reliability and customer experience."

Adler Santos, Engineering Manager, SkyWatch

Money Forward, Inc.

Money Forward, Inc. serves businesses and individuals with an open and fair financial platform. As part of this platform, HiTTO Inc., a Money Forward group company, offers an AI chatbot service that uses tailored NLP models to address the diverse needs of its corporate customers.

"Migrating our AI chatbot service to Amazon EC2 Inf1 instances was straightforward. We completed the migration within 2 months and launched a large-scale service on the Inf1 instances, using Amazon Elastic Container Service (ECS). We were able to reduce our inference latency by 97% and our inference costs by over 50% (over comparable GPU-based instances), by serving multiple models per Inf1 instance. We look forward to running more workloads on the Inferentia-based Inf1 instances.”

Kento Adachi, Technical lead, CTO office, Money Forward, Inc.

Amazon services using AWS Inferentia

Amazon Advertising

Amazon Advertising helps businesses of all sizes connect with customers at every stage of their shopping journey. Millions of ads, including text and images, are moderated, classified, and served for the optimal customer experience every single day.

“For our text ad processing, we deploy PyTorch based BERT models globally on AWS Inferentia based Inf1 instances. By moving to Inferentia from GPUs, we were able to lower our cost by 69% with comparable performance. Compiling and testing our models for AWS Inferentia took less than three weeks. Using Amazon SageMaker to deploy our models to Inf1 instances ensured our deployment was scalable and easy to manage. When I first analyzed the compiled models, the performance with AWS Inferentia was so impressive that I actually had to re-run the benchmarks to make sure they were correct! Going forward we plan to migrate our image ad processing models to Inferentia. We have already benchmarked 30% lower latency and 71% cost savings over comparable GPU-based instances for these models.”

Yashal Kanungo, Applied Scientist, Amazon Advertising


Amazon Alexa
“Amazon Alexa’s AI and ML-based intelligence, powered by Amazon Web Services, is available on more than 100 million devices today – and our promise to customers is that Alexa is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs, which is why we are excited to use Amazon EC2 Inf1 to lower inference latency and cost-per-inference on Alexa text-to-speech. With Amazon EC2 Inf1, we’ll be able to make the service even better for the tens of millions of customers who use Alexa each month.”

Tom Taylor, Senior Vice President, Amazon Alexa

"We are constantly innovating to further improve our customer experience and to drive down our infrastructure costs. Moving our web-based question answering (WBQA) workloads from GPU-based P3 instances to AWS Inferentia-based Inf1 instances not only helped us reduce inference costs by 60%, but also improved the end-to-end latency by more than 40%, helping enhance customer Q&A experience with Alexa. Using Amazon SageMaker for our Tensorflow-based model made the process of switching to Inf1 instances straightforward and easy to manage. We are now using Inf1 instances globally to run these WBQA workloads and are optimizing their performance for AWS Inferentia to further reduce cost and latency.”

Eric Lind, Software Development Engineer, Alexa AI

Amazon Prime Video
“Amazon Prime Video uses computer vision ML models to analyze the video quality of live events to ensure an optimal viewer experience for Prime Video members. We deployed our image classification ML models on EC2 Inf1 instances and saw a 4x improvement in performance and up to 40% savings in cost. We are now looking to leverage these cost savings to innovate and build advanced models that can detect more complex defects, such as synchronization gaps between audio and video files, to deliver an even better viewing experience for Prime Video members.”
 
Victor Antonino, Solutions Architect, Amazon Prime Video
Amazon Rekognition
“Amazon Rekognition is a simple and easy image and video analysis application that helps customers identify objects, people, text, and activities. Amazon Rekognition needs high-performance deep learning infrastructure that can analyze billions of images and videos daily for our customers. With AWS Inferentia-based Inf1 instances, running Rekognition models such as object classification resulted in 8x lower latency and 2x the throughput compared to running these models on GPUs. Based on these results, we are moving Rekognition to Inf1, enabling our customers to get accurate results faster.”
 
Rajneesh Singh, Director, SW Engineering, Rekognition and Video

Videos

AWS re:Invent 2019: Watch Andy Jassy talk about silicon investment and Inf1
AWS re:Invent 2019: ML Inference with new Amazon EC2 Inf1 Instances, featuring Amazon Alexa
Lower the Cost of Running ML Applications with New Amazon EC2 Inf1 Instances - AWS Online Tech Talks