White Paper

Managing life cycle and network interoperability challenges on Navy platforms


Hyperscale Interconnect Fabric

HyperScale, leaf-and-spine (i.e., ECMP) fabrics (Figure 4) are highly modular designs with a well-defined, regularly repeating topology. Within an ECMP or OSPF fabric, traffic is split across many available paths rather than pushed onto a smaller number of higher-speed paths. HyperScale describes a capability rather than a size: the architecture uses small open switches as building blocks and scales to any size without changing them. Critically, fabric performance is quantifiable in straightforward mathematical terms. Even without a theoretical model of why a particular fabric behaves the way it does, you can still predict how it will perform under specific conditions. The most common application of this capability is calculating the fabric's oversubscription rate, which determines the total amount of traffic the network can switch without contention.

Figure 4. High-Performance Enterprise-Class Switched Fabric (spine/fabric switches connected to leaf switches). Features include: 1. all end points equidistant from one another; 2. low latency, zero jitter, non-blocking; 3. zero packet loss, with wire-rate performance at all packet sizes and port combinations; 4. predictable performance, fairly dividing traffic in all scenarios; 5. better buffering, with predictable buffer allocation for any port and packet size.

When properly designed, minimum and maximum delays (i.e., jitter) across the fabric can effectively be determined. It then becomes straightforward to determine at what level a fabric will introduce buffering (latency) as a result of link contention. From a combat system network design perspective, this is a fabric's crucial defining characteristic. To paraphrase economist and technology visionary George Gilder, this key attribute enables the disintegration of monolithic machines across the fabric into a set of special-purpose appliances. The resulting appliances (or modules) can be recombined as building blocks to form the various functions of each combat, C2, or machinery control system.

Leveraging the fabrics to build an organic computing utility grid

Hyperscale fabrics will function as an organic shipboard interconnect utility grid with known and predictable properties. This grid will enable distribution of a common modular infrastructure to equipment rooms across the ship and facilitate easy deployment of modules to bays within these subracks, without the need to tune the physical location of hardware or applications. Module specifications will define a common electromechanical configuration with predefined kinetic and thermal resilience properties. Lastly, the grid supports the composition of (n) systems from a common hardware library of (m) appliance modules (server, storage, I/O). A shipwide, flexible (hyperscale) fabric infrastructure scales simply by adding paths of equal performance, and it can grow in any dimension. This arrangement represents a structured, uniform, future-proof topology and a simple path for growth. Power and cooling are optimized as well, because mixed loads (e.g., colocating server, storage, and console controllers) make more efficient use of spaces.

Hypercomposability and the CMER

Hypercomposability begins with a CMER. Once the interconnect fabric is in place, a set of common equipment racks can be predeployed to any equipment room or space with fabric end points. These racks might comprise two or more identical (unpopulated) common modular subracks and redundant 100 Gb leaf switches.
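As a concrete illustration of the oversubscription arithmetic discussed above, the short sketch below compares a leaf switch's module-facing bandwidth against its spine-facing uplink bandwidth. The port counts and speeds are hypothetical values chosen for illustration (loosely modeled on the 100 Gb leaf switches just mentioned), not figures specified in this paper.

```python
# Minimal sketch: oversubscription ratio for one leaf switch in a
# leaf-and-spine fabric. All port counts and speeds below are
# illustrative assumptions, not values from this paper.

def oversubscription_ratio(down_ports: int, down_gbps: float,
                           up_ports: int, up_gbps: float) -> float:
    """Ratio of module-facing (downstream) bandwidth to spine-facing (upstream) bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical leaf: 20 x 100 GbE ports toward modules, 5 x 400 GbE uplinks to the spine.
ratio = oversubscription_ratio(down_ports=20, down_gbps=100, up_ports=5, up_gbps=400)
print(f"Oversubscription: {ratio:.2f}:1")  # 1.00:1 -> non-blocking at wire rate
```

A ratio at or below 1:1 indicates the leaf can forward its full offered load toward the spine without contention; ratios above 1:1 quantify how much link contention, and therefore buffering, to expect under worst-case traffic.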
Such a proposed shipboard data processing utility will include:

1. CMERs providing identical power distribution, cooling (power in and power out), and kinetics mitigation systems (shock/vibration).

2. Within the CMER, two CMSRs can provide a 14U "hotel" space for 20 common modules (processing, I/O, storage, graphics, attached coprocessors, and specialized network-based functions such as a WAN fabric extender), plus dual interconnect switch planes.

3. Within the CMSR, each module slot or bay will be connected via dual 128 Gbps PCI Express links to two programmable ExpressFabric interconnect planes within the CMSR.

4. Switch planes connect all bays in one or two nonblocking PCI Express "Clos fabrics" for a guaranteed, available bisection bandwidth in excess of 2 Tb/s (see the sketch following this list). This architecture is expected to be fully compatible with Gen4 PCI Express and/or future high-speed interconnects.

5. CMSR switch planes facilitate a resilient, nonblocking, PCIe-speed (128 Gb/s) IP network between processing modules within the common modular subrack without the need for an external switch.

6. Each CMSR slot will accept any common module in any configuration required for the larger system function; the switch planes are programmable such that each 7U subrack can host (10) high-performance server modules, or a combination of server modules and storage, I/O, or attached coprocessors. Switch fabrics within two subracks may also be directly interconnected for added flexibility.
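The bisection-bandwidth figure in item 4 can be sanity-checked with the same kind of arithmetic used for oversubscription. The sketch below is an estimate only: it assumes 20 module bays across the two CMSRs, assumes both ExpressFabric planes carry traffic concurrently, and applies the usual rule that a full-bisection (nonblocking) fabric delivers half its endpoints' aggregate injection bandwidth across the worst-case cut.

```python
# Rough bisection-bandwidth estimate for the dual-plane PCIe fabric described
# above. Bay count, plane count, and link rate are taken from the list;
# treating both planes as concurrently active is an assumption.

bays = 20          # common modules across two CMSRs (10 per 7U subrack)
planes = 2         # dual ExpressFabric interconnect planes
link_gbps = 128    # PCI Express link rate per bay, per plane

per_bay_gbps = planes * link_gbps            # 256 Gb/s injected per bay
bisection_gbps = (bays // 2) * per_bay_gbps  # worst-case equal split of the bays

print(f"Estimated bisection bandwidth: {bisection_gbps} Gb/s "
      f"(~{bisection_gbps / 1000:.2f} Tb/s)")  # 2560 Gb/s
```

Under these assumptions the estimate lands at roughly 2.56 Tb/s, which is consistent with the "in excess of 2 Tb/s" figure quoted above.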
