White Paper

Managing life cycle and network interoperability challenges on Navy platforms


Our goal is to adapt these commercial-style strategies to define and deliver an enduring, flexible shipboard combat-system compute, I/O, and network architecture based on an open, scalable, and extensible modular design built for reusability, composability, and synchronous technology insertion. The Navy would then be better able to avoid high costs on future systems by abstracting contemporary technologies from shipboard infrastructures, maximizing commonality across the ship's various networks and data centers, and minimizing non-value-add activities and redundancies.

Leveraging commercial market success

For Navy, and indeed DoD-wide, embedded computing systems, we propose adopting strategies that have had profound and positive effects in three very different commercial markets: social media, shipping containers, and automotive manufacturing.

Social media: HyperScale cloud, Open Compute Project® and Open19 Foundation

According to Facebook, "Loading a user's home page typically requires accessing hundreds of servers, processing tens of thousands of individual pieces of data, and delivering the information selected in less than one second." What is astounding is that Facebook has more than 2 billion members and more than 1.1 billion daily active users, 84.5 percent of whom are outside North America. To achieve this quality of service, Facebook, LinkedIn, and virtually every contemporary data center rely on HyperScale spine-and-leaf interconnect fabrics for low latency, high multipath bandwidth, and extreme resilience to hardware failures. The ability to provide continuous and consistent service-level agreements (SLAs) for each customer, irrespective of the millions of other simultaneous users, fuels these organizations' success. Along these lines, Facebook adopted a data center strategy facilitated by HyperScale fabrics and the Open Compute Project (OCP).
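The resilience property of a spine-and-leaf fabric can be sketched with a small model. This is illustrative only (the spine and leaf counts are hypothetical, not figures from the paper): in a two-tier fabric where every leaf switch uplinks to every spine switch, each leaf-to-leaf flow has one equal-cost path per spine, so losing a spine shaves off a fraction of the cross-fabric bandwidth rather than partitioning the network.

```python
# Minimal sketch of two-tier spine-and-leaf multipath behavior.
# Assumption: full mesh between leaves and spines (every leaf has
# one uplink to every spine), as in a standard folded-Clos fabric.

def leaf_to_leaf_paths(num_spines: int, failed_spines: int = 0) -> int:
    """Equal-cost leaf->spine->leaf paths surviving spine failures."""
    return max(num_spines - failed_spines, 0)

def surviving_bandwidth(num_spines: int, failed_spines: int) -> float:
    """Fraction of cross-fabric bandwidth left after spine failures."""
    return leaf_to_leaf_paths(num_spines, failed_spines) / num_spines

# With 4 spines, one spine failure leaves 3 of 4 paths between every
# pair of leaves: the fabric degrades gracefully, it never partitions.
print(leaf_to_leaf_paths(4))        # 4
print(surviving_bandwidth(4, 1))    # 0.75
```

The same arithmetic explains the "extreme resilience" claim: a hardware failure anywhere in the spine costs only 1/S of the aggregate bandwidth, where S is the spine count.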
OCP is "a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on [large-scale] compute infrastructure." According to Jay Parikh, VP of Infrastructure Engineering at Facebook, between 2011 and 2014 "Facebook saved more than $1.2 billion by using Open Compute designs to streamline its datacenter and servers… marginal gains, compound dramatically over time..." LinkedIn adopted a similar approach for small to medium-sized data centers through the Open19 Foundation. While LinkedIn currently operates more than 150,000 servers, its smallest data center instantiation is 16 servers.

Problem domain/landscape

In our view, the plethora of disparate computing elements within Navy systems expands the number of combinations or permutations by the factorial of the number of those individual unique elements. Worse yet, each system has its own logistics tail and support requirements, and each undergoes expensive certification to virtually the same extended operational environment, independent of the others. With this approach, the Navy loses the ability to leverage a truly common processing or common display system, not just across the ship or platform, but across the entire fleet. Economies of scale are not achieved, and significant costs are hidden and locked in during initial procurement of the system.

Solution

To that end, we propose a Navy-standard Hyper Infrastructure: a tactical system architecture based on a common modular equipment rack; HyperComposable compute, storage, networking, graphics, and special-function modules; and a HyperScale leaf-and-spine interconnect fabric, as the foundation for any number of mission-critical weapons, combat, C4ISR, and machinery control systems.
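The scaling argument in the problem-domain discussion can be made concrete with a short sketch. The element counts below are hypothetical, chosen only to illustrate the growth rates: permutations of k unique elements grow as k! (the factorial growth the paper cites), while each unique element also carries its own independent certification effort.

```python
from math import factorial

# Illustrative sketch of the problem-domain scaling argument.
# Element counts are hypothetical, not figures from the paper.

def configuration_permutations(unique_elements: int) -> int:
    """Orderings of k unique computing elements: k! (factorial growth)."""
    return factorial(unique_elements)

def certification_efforts(unique_elements: int) -> int:
    """Each unique element is certified independently of the others,
    so certification effort scales linearly with unique designs."""
    return unique_elements

# Factorial growth dwarfs the element count almost immediately:
for k in (3, 6, 12):
    print(k, configuration_permutations(k), certification_efforts(k))
# 3 -> 6 permutations; 6 -> 720; 12 -> 479,001,600
```

The asymmetry is the point: trimming the roster of unique elements yields a super-linear reduction in the combinations that must be integrated, certified, and supported.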
These systems will be composed of a relatively small set (m) of common compute, storage, and I/O modules, used as building blocks for (n) systems of virtually any size and scope, where (m) is potentially orders of magnitude smaller than (n). In existing legacy network topologies, these HyperComposable systems can be deployed just like any contemporary commercial rack-mount or bladed server system. For new construction and extensive technology-insertion cycles, adopting a high-performance (100 Gb/s+), low-latency (<3 μs end-to-end), highly resilient InfiniBand or Ethernet fabric as the core network will further enable the concept of the disaggregated system, and that network may remain viable for half (or more) of the life of the ship or platform.

Beyond total cost of ownership

Total cost of ownership calculations are often applied to technology platforms, but they seldom factor in the parallel paths of environmental qualification, software certification, shipboard industrial work, spares requirements (relative to every other system on the platform), and training and support costs. Moreover, such calculations often do not consider the difficulty of performing unforecasted technology refreshes and regular technology-insertion periods that may take a ship's systems offline for extended periods of time. Compared with large-scale commercial enterprises (Amazon Web Services, Microsoft Azure Cloud, Netflix, Google, Facebook, et al.), one can see a radically different strategy.
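The total-cost argument can be sketched numerically. All dollar figures and counts below are hypothetical placeholders, not data from the paper; the sketch only shows the structure of the claim: when every disparate system repeats the parallel cost paths listed above independently, fleet cost scales with (n) unique designs, whereas common modules amortize those paths over (m) designs, with m << n.

```python
# Illustrative TCO sketch. Cost figures and design counts are
# hypothetical placeholders; only the scaling structure matters.

PER_DESIGN_COSTS = {  # parallel cost paths the paper lists ($M each)
    "environmental_qualification": 2.0,
    "software_certification": 1.5,
    "shipboard_industrial_work": 3.0,
    "spares_pipeline": 1.0,
    "training_and_support": 0.8,
}

def fleet_cost(num_unique_designs: int) -> float:
    """Each unique design pays every parallel cost path independently."""
    return num_unique_designs * sum(PER_DESIGN_COSTS.values())

# n = 40 disparate systems versus m = 5 common module types:
print(fleet_cost(40))   # cost borne across 40 unique designs
print(fleet_cost(5))    # same cost paths amortized over 5 designs
```

Because the per-design costs are paid once per unique design rather than once per fielded system, shrinking the roster from n unique designs to m common modules cuts these recurring paths by roughly the ratio n/m, independent of how many systems are ultimately deployed.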
