
Understanding OPNFV


provide a report based on SNIA's (Storage Networking Industry Association) Performance Test Specification. The project measures latency, throughput, and IOPS (I/O operations per second) across different block sizes and queue depths (the number of outstanding I/Os in flight). Getting down to actual steps, a Docker container with the Storperf test APIs is invoked on the jumphost. Automated tests use these APIs to spin up volumes and VMs, connect them, run a variety of storage tests, and collect the results. Storperf is suitable for both HDD and SSD storage. The methodology of using a Docker container for test tooling is common across test projects. Finally, Storperf can also be launched in standalone mode or through Yardstick.

Qtip

Remember benchmarks such as MIPS or TPC-C, which attempted to capture infrastructure performance in one single number? Qtip attempts to do the same for NFVI compute performance (storage and networking are part of the roadmap). Qtip is a Yardstick plugin that collects metrics from a number of tests selected from five different categories: integer, floating point, memory, deep packet inspection, and cipher speeds. These numbers are crunched to produce a single Qtip benchmark. The baseline is 2,500, and bigger is better! In that sense, one of the goals of Qtip is to make Yardstick results very easy to consume.

Bottlenecks

Wouldn't it be great to find system performance limitations (in other words, bottlenecks) in a staging environment rather than in a production environment? That's exactly what the Bottlenecks project does. Bottlenecks is integrated with Yardstick. As opposed to Qtip, where the goal is to create a new benchmark, here the goal is to use a variety of existing benchmarks and metrics to measure whether network, storage, compute, middleware, and application performance meets a user's requirements. The entire process is driven by "experiment configuration files" that are set up by the user.
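As a rough illustration of what such an experiment configuration file might express, the fragment below shows a staged load ramp with pass criteria. All field names and values here are hypothetical, not the Bottlenecks project's actual schema.

```yaml
# Hypothetical experiment configuration -- illustrative only;
# consult the Bottlenecks project documentation for the real schema.
experiment:
  name: storage-throughput-limit
  workload:
    tool: fio          # assumed workload generator
    block_size: 4k
    queue_depth: 32
  load_ramp:           # scale the workload until a limit is hit
    start_vms: 2
    max_vms: 32
    step: 2
  pass_criteria:       # the requirements the user wants verified
    min_iops: 50000
    max_latency_ms: 10
```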
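To make the idea of a single composite benchmark concrete, here is an illustrative sketch of how per-category results could be rolled up into one Qtip-style number against the 2,500 baseline. The aggregation method shown (a geometric mean of measured-to-reference ratios) is an assumption for illustration only; the text does not specify Qtip's actual formula, and the reference values below are made up.

```python
from statistics import geometric_mean

# Illustrative only: the exact aggregation Qtip uses is not described
# in this text. Here each category score is the ratio of a measured
# result to a reference result, and the composite index is the
# geometric mean of those ratios scaled by the 2,500 baseline.
BASELINE = 2500

def qtip_index(measured: dict, reference: dict) -> float:
    """Combine per-category results into a single benchmark number."""
    ratios = [measured[cat] / reference[cat] for cat in reference]
    return BASELINE * geometric_mean(ratios)

# Hypothetical per-category results for the five categories named above.
reference = {"integer": 100, "floating_point": 100, "memory": 100,
             "dpi": 100, "cipher": 100}
measured = {"integer": 110, "floating_point": 95, "memory": 120,
            "dpi": 100, "cipher": 105}

score = qtip_index(measured, reference)
# A system that exactly matches the reference on every category
# scores the baseline of 2,500; faster systems score higher.
```

The appeal of this style of rollup is exactly what the text describes: a consumer can compare two deployments with one number instead of five tables of raw metrics.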
Bottlenecks drives its activity based on these files and fully automates setting up the infrastructure, creating workloads, running tests, and collecting results. The data collected from these tests tends to be quite large, so the project also invests in analytics and visualization tools. The results help identify the metric(s) that do not meet requirements, in turn enabling the user to make decisions such as hardware selection, software tuning, or protocol selection, and they assist in evaluating compliance with SLAs.

Dovetail

As we saw in Chapter 3, one of the primary benefits of using open source software is to avoid
