Installation Guide

Big Data Cumulus Linux installation guide


Figure 3. Modern Layer 3 Clos "Spine Leaf" Architecture

Equal-cost multipath (ECMP) routing is used to send traffic across all available uplinks and spines. Standard routing protocols such as OSPF or BGP provide a simple failure detection mechanism and route around failures. Traditional Layer 2 designs using VLANs should be avoided: they are brittle by nature, with a coarse failure domain that can involve half of the fabric, and they often depend on proprietary protocols.

You may be required to send traffic across a core or backbone to another pod or cluster, which creates non-deterministic and higher latency for some traffic. To optimize network performance, run workloads locally where possible. You can use the Prescriptive Topology Manager (PTM) and LLDP in Cumulus Linux to verify all physical connectivity against a complete blueprint, eliminating problems caused by manual cabling errors or unreachable links, and to extract a rack-aware topology for the NameNode through a simple script (sketches of both appear at the end of this section). However, with a leaf and spine topology, location becomes less important from a performance standpoint.

Scaling Out

The advantage of a Layer 3 Clos network architecture is that you can add spine switches as needed to scale horizontally. You can add up to 6 uplinks per leaf switch in a Layer 3 environment, whereas in a Layer 2 environment you are limited to 2 uplinks per access switch.

Figure 4. Adding Additional Switches
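To make the ECMP behavior concrete, the following is a minimal sketch of an interface-based (unnumbered) BGP configuration of the kind accepted by the routing suite in Cumulus Linux (Quagga/FRRouting) on a leaf switch. The ASN, router ID, and uplink ports swp51 and swp52 are illustrative assumptions, not values taken from this guide.

    router bgp 65011
      bgp router-id 10.0.0.11
      ! One unnumbered BGP session per uplink, one toward each spine
      neighbor swp51 interface remote-as external
      neighbor swp52 interface remote-as external
      !
      address-family ipv4 unicast
        ! Install multiple equal-cost BGP paths (ECMP) across the uplinks
        maximum-paths 64

A comparable OSPF configuration on point-to-point uplinks achieves the same effect; the key design choice is that every uplink carries an independent routed session, so failures are detected and routed around by the protocol rather than by spanning tree.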
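PTM verifies the cabling plan by comparing LLDP neighbor information against a topology file in Graphviz DOT format (typically /etc/ptm.d/topology.dot). The sketch below uses hypothetical switch names and port numbers; each edge states which port on one device should be cabled to which port on the other.

    graph "big-data-fabric" {
      "spine01":"swp1" -- "leaf01":"swp51";
      "spine01":"swp2" -- "leaf02":"swp51";
      "spine02":"swp1" -- "leaf01":"swp52";
      "spine02":"swp2" -- "leaf02":"swp52";
    }

Once the file is distributed to the switches, ptmctl reports each port as pass or fail, so mis-cabled or unreachable links are caught before they affect the cluster.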

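The rack-aware topology mentioned above is consumed by the Hadoop NameNode through a user-supplied topology script, referenced by the net.topology.script.file.name property in core-site.xml. Hadoop invokes the script with one or more host names or IP addresses and expects one rack path per argument on standard output. The subnet-to-rack mapping below is a hypothetical illustration; in practice it could be generated from the LLDP/PTM data rather than hard-coded.

    #!/usr/bin/env python
    # Minimal rack-awareness topology script for the Hadoop NameNode.
    # Prints one rack path (e.g. /rack01) for every host passed as an argument.
    import sys

    # Assumed mapping of data-node subnets to racks (illustrative values only).
    RACK_BY_SUBNET = {
        "10.1.1.": "/rack01",
        "10.1.2.": "/rack02",
    }
    DEFAULT_RACK = "/default-rack"

    for host in sys.argv[1:]:
        rack = DEFAULT_RACK
        for prefix, mapped_rack in RACK_BY_SUBNET.items():
            if host.startswith(prefix):
                rack = mapped_rack
                break
        print(rack)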