White Paper

Enabling big data processing and AI-powered everything, everywhere

Nothing is more symbolic of current state-of-the-art big data processing capability than Intel's Xeon Scalable processor with on-die AI acceleration. This data center processor uniquely features Deep Learning Boost, which extends Advanced Vector Extensions-512 (AVX-512) to accelerate inference applications such as speech recognition, image recognition, language translation, object detection and other pattern manipulations. Each processor has a new set of embedded accelerators (Vector Neural Network Instructions, or VNNI) that speed up the dense computations characteristic of convolutional neural networks (CNNs) and deep neural networks (DNNs). Mercury embeds these processors in fog and edge applications with optional BuiltSECURE systems security engineering and extreme environmental protection packaging.

Miniaturization is achieved with system-in-package, wafer stacking and other 2.5D and 3D fabrication technologies.

Performance cooling – Effective and efficient conduction, air, liquid and hybrid cooling technologies for embedded processing enable small form factor Xeon-powered packages to operate reliably, at full throttle, delivering unrestricted processing performance across wide temperature ranges. Platform fuel and refrigerants are used in liquid-cooled solutions, and advanced air management approaches are used in air-cooled solutions instead of the traditional, less efficient air-blowing approaches used in data centers.

Figure: Next-generation air, liquid and hybrid cooling for reliable full-throttle big data and AI processing at the edge

Unrestricted fabrics – Many open system architectures have performance bottlenecks, especially between their module interconnects. New backplane channel and enhanced interconnect technologies enable unrestricted switch fabric performance, eliminating limitations on processing power and scalability across large systems and broad temperature ranges.

Figure: Ethernet signal transmission "eye" taken from an enhanced OpenVPX subsystem backplane channel – a wider eye produces a reduced bit error rate

To avoid vendor lock-in and quicken technology adoption and refreshes, Mercury technologies are compliant with open systems standards. When combined, they allow Intel Xeon Scalable and other cloud computing capabilities to be physically deployed in a wide spectrum of environments and form factors.
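As a rough illustration of the Deep Learning Boost capability described above, the sketch below shows how an AVX-512 VNNI instruction fuses the INT8 multiply-accumulate at the heart of CNN/DNN inference into a single operation per 64 bytes of data. This is a minimal, generic example, not Mercury's or Intel's library code: the function name and data layout are assumptions, and it requires a compiler and CPU with AVX512-VNNI support (e.g. build with -mavx512f -mavx512vnni).

#include <immintrin.h>  /* AVX-512 and VNNI intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Dot product of quantized INT8 activations (unsigned) and weights (signed),
 * with n a multiple of 64. VPDPBUSD fuses multiply and accumulate: four
 * u8*s8 products are summed into each 32-bit lane of the accumulator. */
int32_t int8_dot_vnni(const uint8_t *act, const int8_t *wgt, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i a = _mm512_loadu_si512((const void *)(act + i));
        __m512i w = _mm512_loadu_si512((const void *)(wgt + i));
        acc = _mm512_dpbusd_epi32(acc, a, w);   /* Deep Learning Boost (VNNI) */
    }
    return _mm512_reduce_add_epi32(acc);        /* horizontal sum of 16 lanes */
}

Without VNNI, the same inner step takes a three-instruction sequence (VPMADDUBSW, VPMADDWD, VPADDD); collapsing it into one VPDPBUSD is what yields the speed-up for dense CNN/DNN layers.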
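The eye-diagram caption's claim that a wider eye reduces bit error rate can be made concrete with the standard Gaussian-noise approximation BER ≈ ½ erfc(Q/√2), where Q is the vertical eye opening divided by the RMS noise at the sampling instant. The snippet below only illustrates that textbook relationship; it is not taken from, and does not model, Mercury's backplane measurements.

#include <math.h>
#include <stdio.h>

/* Gaussian-noise approximation: BER ~= 0.5 * erfc(Q / sqrt(2)),
 * where Q is the eye opening divided by the RMS noise amplitude. */
static double ber_from_q(double q)
{
    return 0.5 * erfc(q / sqrt(2.0));
}

int main(void)
{
    /* Widening the eye from Q = 6 to Q = 7 cuts the BER by roughly three
     * orders of magnitude (about 1e-9 down to about 1.3e-12). */
    printf("Q = 6.0 -> BER = %.2e\n", ber_from_q(6.0));
    printf("Q = 7.0 -> BER = %.2e\n", ber_from_q(7.0));
    return 0;
}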
