White Paper

Bringing commercial technologies to defense in the age of AI


"For every 100 miles data travels, it loses roughly 0.82 milliseconds." (Avnet, AI at the Edge, 2018)

To tackle big AI workloads efficiently, edge processing systems must integrate the latest data-center CPUs, storage, coprocessors (GPUs, ASICs and FPGAs), interconnects and architectures specifically designed and optimized for big data processing. However, creating purpose-built AI edge hardware is not as simple as selecting and packaging the latest components. Hardware must be architected to eliminate component bottlenecks, minimize latency and accelerate AI frameworks. Collaboration with technology leaders is critical to selecting and integrating the right components and technology stack. To this end, Mercury Systems and Intel have partnered for decades to enhance processing, packaging, security and open software technologies, and make them ready for defense applications.

OPTIMIZING PERFORMANCE WITH INTEL® TECHNOLOGIES

Intel is a technology leader that supports heterogeneous architectures through a distinct portfolio of AI-facilitating solutions, including CPUs, GPUs, FPGAs and purpose-built AI accelerators (ASICs), with optimized deep learning frameworks and toolkits that can be deployed on various hardware. Compared with dedicated systems, Intel® Xeon® Scalable processors deliver greater cost-efficiency and flexibility: they can be re-provisioned for diverse workloads to increase server utilization, reduce total cost of ownership and maximize return on investment. Second-Generation Intel® Xeon® Scalable processors with Intel® C620 series chipsets feature built-in Intel® Deep Learning Boost to increase training and inferencing performance. Each processor has a set of embedded accelerators that speed up the dense computations characteristic of convolutional and deep neural networks (CNNs and DNNs).
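The quoted rule of thumb implies a simple latency estimate. A minimal sketch, assuming the ~0.82 ms per 100 miles figure quoted above (the function name and distance are illustrative, not from the paper):

```python
# Rough one-way transit-latency estimate based on the ~0.82 ms per
# 100 miles figure quoted above. Illustrative arithmetic only; real
# network latency also depends on hops, queuing and processing.

MS_PER_100_MILES = 0.82  # latency added per 100 miles of travel

def transit_latency_ms(miles: float) -> float:
    """One-way latency in milliseconds for data traveling `miles`."""
    return miles / 100.0 * MS_PER_100_MILES

# Example: a sensor feed backhauled 500 miles to a data center
print(transit_latency_ms(500))  # ~4.1 ms one way, before any processing
```

Numbers like this are why the paper argues for processing at the edge: round trips to a distant data center add latency that mission-critical AI workloads cannot always afford.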
Intel® DL Boost and Intel® AVX-512 Vector Neural Network Instructions (VNNI) are designed to accelerate AI/deep learning workloads such as image classification, speech recognition, image recognition, language translation, object detection and other pattern manipulations. To support and accelerate AI application development, Intel has optimized software tools and frameworks widely used today, such as Caffe*, TensorFlow* and MXNet*, for neural networks and AI applications on Intel Xeon processor-based platforms.

Additionally, Intel's OpenVINO™ toolkit enables the creation and optimization of deep learning inference models and simplifies their deployment across multiple Intel platforms (CPU, processor graphics, FPGA and vision accelerator), supporting implementations from cloud architectures to edge devices. This open-source toolkit offers the developer community flexibility when formulating deep learning and AI solutions. Powered by the OpenVINO toolkit, Intel's Vision Processing Unit computes vision and AI inference algorithms to extract meaning from multi-modal sensor data and enhance deep learning performance. These reprogrammable FPGA accelerators can be integrated into Mercury's servers, allowing developers to implement algorithms on different types of mobile platforms and across edge applications.

Mercury integrates Intel Xeon Scalable and other Intel ecosystem capabilities across a wide spectrum of environments and form factors, improving interoperability, affordability and reliability. Mercury's broad solution portfolio channels Intel's AI-empowering technologies to defense platforms.
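AVX-512 VNNI speeds up neural networks by fusing the 8-bit multiply and 32-bit accumulate steps of a dot product into a single instruction. A minimal pure-Python sketch of that arithmetic pattern (an illustration of what the instruction computes, not of SIMD execution; the function and values are hypothetical):

```python
# Illustration of the int8-multiply / int32-accumulate pattern that
# AVX-512 VNNI fuses into one instruction. Pure-Python sketch of the
# arithmetic only; the hardware does this across wide SIMD lanes.

def int8_dot_accumulate(acc: int, a: list[int], b: list[int]) -> int:
    """Accumulate the dot product of two int8 vectors into an int32 sum."""
    for x, y in zip(a, b):
        acc += x * y
    # Wrap to a signed 32-bit result, as a hardware accumulator would
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# A CNN inner loop is millions of these: weights times activations, summed
weights = [3, -1, 4, 1]      # example int8 weights
activations = [2, 7, 1, 8]   # example int8 activations
print(int8_dot_accumulate(0, weights, activations))  # 6 - 7 + 4 + 8 = 11
```

Quantizing weights and activations to 8 bits lets four times as many values fit in each vector register as with 32-bit floats, which is where the inference speedup comes from.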
Mercury's building-block approach, from individual components up to a full architecture:

ARCHITECTURE: Scale and sustain your application with cost-effective refreshes
PLATFORM: Create new types of functionality with every blade combination
BLADES: Tailor to your workload by selecting and configuring individual blades
COMPONENTS: Optimize performance with the latest datacenter-caliber commercial technologies

Intel's Xeon® Scalable processors with on-die AI accelerators are the gold standard in big data and AI processing.
