WHITE PAPER
Accelerating Big Data Processing and AI at the Edge
The Mercury Systems logo and the following are trademarks or registered trademarks of Mercury Systems, Inc.: Mercury Systems, Innovation That
Matters, and BuiltSECURE. Other marks used herein may be trademarks or registered trademarks of their respective holders. Mercury believes
this information is accurate as of its publication date and is not responsible for any inadvertent errors. The information contained herein is subject
to change without notice.
© 2021 Mercury Systems, Inc. 8083.00E-0921-wp-AcceleratingAI_Edge
MADE IN USA
Corporate Headquarters
50 Minuteman Road
Andover, MA 01810 USA
+1 978.967.1401 tel
+1 866.627.6951 tel
+1 978.256.3599 fax
International Headquarters
Mercury International
Avenue Eugène-Lance, 38
PO Box 584
CH-1212 Grand-Lancy 1
Geneva, Switzerland
+41 22 884 51 00 tel
Learn more
Visit: mrcy.com/nvidia
In any communications network, roughly 80% of a node's
battery power will be consumed by receive/transmit
functions. AI at the edge significantly reduces data
communications volume by identifying and transmitting
only potentially significant portions of a sensor data stream,
preserving battery power.
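The filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not Mercury or NVIDIA code: `significance_score` stands in for whatever on-device AI model scores a sensor frame, and the threshold is an assumed tuning parameter. The node transmits only frames that clear the threshold, so the radio (the dominant battery drain) stays idle most of the time.

```python
# Hypothetical sketch of AI-driven transmit filtering at an edge node.
# `significance_score` is a placeholder for a real on-device model.

def significance_score(frame):
    # Placeholder model: mean absolute amplitude of the frame.
    return sum(abs(x) for x in frame) / len(frame)

def frames_to_transmit(frames, threshold=0.5):
    # Only frames the model flags as significant reach the radio.
    return [f for f in frames if significance_score(f) >= threshold]

stream = [[0.01, 0.02], [0.9, 1.1], [0.0, 0.05]]
print(frames_to_transmit(stream))  # only the high-activity frame is sent
```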
AI on GPUs offers other potential ways to enhance 5G
efficiency. NVIDIA's Aerial SDK is just one example of using
GPUs directed by AI to speed up broadband signal processing.
Beyond that, advanced self-organizing networks (SONs) may
soon use AI to continually optimize 5G communications as
radio frequency (RF) links between nodes face changes in
effective bandwidth or are eliminated completely.
The symbiotic relationship between all three technologies
offers vast potential for both newly designed edge-based
applications and continued performance enhancements.
INTERTWINED AT THE EDGE: GPUS, AI AND 5G
GPU semiconductors, AI algorithms and 5G networking are a set
of intertwined technologies. Together they are enabling a new
generation of edge applications.
GPUs are an excellent processing platform for AI-based
applications, implementing parallel processing of input data
streams using a single instruction, multiple data (SIMD) model.
This architecture makes GPUs extremely efficient at executing
AI deep learning algorithms, which perform the same operation
on many segments of input data.
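The "same operation on many segments of input data" pattern can be seen in a minimal sketch. This example uses NumPy vectorization on a CPU purely to illustrate the data-parallel shape of the computation; on a GPU the same element-wise operation would be dispatched across thousands of parallel threads.

```python
import numpy as np

# A ReLU activation (a common deep-learning operation) applied to a whole
# batch at once: one instruction (clamp at zero) executed across many data
# elements -- the SIMD pattern that GPUs accelerate.
def relu(batch: np.ndarray) -> np.ndarray:
    return np.maximum(batch, 0.0)

batch = np.array([[-1.5, 0.3], [2.0, -0.7]])
print(relu(batch))  # negative entries clamp to 0, positives pass through
```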
As new 5G networks connect thousands or millions of nodes,
GPUs running AI can move intelligence to the edge. However,
most of these deployed edge nodes will operate on a limited
power budget. While NVIDIA has steadily reduced GPU power
requirements, power consumption remains a constraint.
Fortunately, AI itself can be used to reduce the overall power
needs of 5G nodes.
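A back-of-envelope calculation shows why this matters. It uses the paper's ~80% figure for receive/transmit power; the 90% data-volume reduction is an assumed, illustrative number, and the model assumes radio power scales linearly with data transmitted.

```python
# Back-of-envelope power model for an edge node with AI transmit filtering.
RF_SHARE = 0.80        # fraction of node power spent on receive/transmit (from the paper)
DATA_REDUCTION = 0.90  # assumed fraction of data the AI filter avoids sending

# Non-RF power is unchanged; RF power shrinks with the data volume.
new_power = (1 - RF_SHARE) + RF_SHARE * (1 - DATA_REDUCTION)
battery_life_gain = 1 / new_power
print(f"relative power draw: {new_power:.2f}")               # 0.28
print(f"battery life multiplier: {battery_life_gain:.2f}x")  # ~3.57x
```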
Figure: Traditional communications network vs. GPU power requirements with AI — roughly 80% of battery power is consumed by receive/transmit functions; AI at the edge significantly reduces data volume, preserving battery power.