Machine Learning - eBook (EN)

IDC whitepaper: Accelerate Machine Learning Development to Build Intelligent Applications Faster

Issue link: https://read.uberflip.com/i/1444470

Amazon EC2 P3 instances deliver up to one petaflop of mixed-precision performance per instance to significantly accelerate machine learning applications, and they have been proven to reduce machine learning training times from days to minutes.

The NVIDIA V100 Tensor Core is the first Tensor Core GPU brought to market, and it is built to accelerate AI, high-performance computing (HPC), data science, and graphics. It is powered by the NVIDIA Volta architecture, comes in 16GB and 32GB configurations, and offers the performance of up to 32 CPUs in a single GPU. Data scientists, researchers, and engineers can now spend less time optimizing memory usage and more time designing the next AI breakthrough.

Amazon EC2 G4 instances deliver the industry's most cost-effective and versatile GPU instance for deploying machine learning models in production. G4 instances provide the latest-generation NVIDIA T4 GPUs, up to 100Gbps of networking throughput, and up to 1.8TB of local NVMe storage. G4 instances are offered in different sizes, with access to one or multiple GPUs and varying amounts of vCPU and memory, giving developers the flexibility to pick the right instance size for their applications. G4 instances are optimized for machine learning inference deployments such as image classification, object detection, recommendation engines, automated speech recognition, and language translation, workloads that push the boundaries of AI innovation and latency.

The NVIDIA T4 GPU accelerates diverse cloud workloads, including high-performance computing, deep learning training and inference, machine learning, data analytics, and graphics. Based on the NVIDIA Turing architecture and packaged in an energy-efficient 70W, small PCIe form factor, the T4 is optimized for mainstream computing environments and features multi-precision Turing Tensor Cores and new RT Cores. Combined with accelerated, containerized software stacks from NGC, the T4 delivers revolutionary performance at scale.

Challenges and Opportunities

COVID-19 is leading every organization to examine and understand its business processes and methods of doing business. The reasons for this are many, but the bottom line is that organizations that are not open to changing the way they do business may not survive this incredibly complex business cycle. Organizations need to understand where AI and deep learning technology will deliver the best business benefits. They also need to understand what skill sets are needed to build and deploy intelligent, AI-enabled applications. Finally, organizations need to reassess what tools, infrastructure, and environments are needed to put these intelligent, AI-enabled applications to use, especially given how much the development and deployment of deep learning models has changed over the past two years.
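
To make the instance-sizing discussion above concrete, the following is a minimal sketch (not from the whitepaper) of launching a single-GPU G4 instance with the AWS SDK for Python (boto3). The region, AMI ID, and key pair name are placeholders you would replace with your own values; g4dn.xlarge is assumed here as the smallest G4 size, with one NVIDIA T4 GPU, 4 vCPUs, and 16 GiB of memory.

import boto3

# Sketch only: launch one single-GPU G4 instance for model serving (inference).
# Replace the placeholder AMI ID and key pair with values valid for your account and region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: use a Deep Learning AMI for your region
    InstanceType="g4dn.xlarge",        # 1 NVIDIA T4 GPU, 4 vCPUs, 16 GiB memory
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder SSH key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "ml-inference"}],
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])

Moving up or down the G4 family follows the same pattern; only InstanceType changes (for example, g4dn.12xlarge for a four-GPU configuration).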
