White Paper

Next Generation Airborne Video Servers

Introduction

This white paper describes the contemporary challenges facing airborne integrators as they leverage next-generation technology for mission-critical airborne operations. In answer to this challenge, the paper lays out a framework that enables on-platform video streaming to become smarter and more capable, ultimately aiding autonomous flight.

The evolution of safety-critical airborne video switching

Airborne video server computers are growing in importance and criticality, fueled by an increasing number of sophisticated sensors and cameras. Video servers control and process many of these sensor data streams and act as the smart electronic switches that connect streaming camera video to the pilot's and crew's visual displays. Pilots and aircrews rely on these displays to make decisions, selecting on the fly what to study and from which sensor, or from which composition of multiple sensor streams. They interact with these increasingly smart and intuitive displays to zoom, rotate and tag what they are viewing. Search-assist and rescue operations typify the new missions that are driving the addition of more complex sensors and cameras to new and existing platforms.

Early video servers primarily switched video channels and performed video conversion, usually from legacy analog streams to different digital and/or analog formats. Today there is a broad spectrum of video formats, both analog and digital, and that spectrum continues to evolve, driven by the need for higher resolutions and faster frame rates. A video switch stays relevant through modularity, which allows relatively easy hardware and programming refreshes so that the streaming-data and compute elements remain up to date.

Figure 1. Some of the video functions now available on-platform

Contemporary video servers

Contemporary video servers are augmented with on-the-fly video processing, video composition, multichannel live recording, and the ability to handle many more channels. As the number of sensors increases, the video server must compose multiple views onto a limited number of cockpit screens, typically through picture-by-picture (PbP), picture-in-picture (PiP), or by zooming or rotating the image. These processing operations are performed by graphics processing units (GPUs), which have become the main processing engine of video servers. The GPU is a dedicated hardware processor, optimized to manipulate and accelerate the creation of images in a frame buffer intended for output to a display device. Video servers capture streaming video and transform it into a common format. They hold the captured video in fast, dedicated frame-buffer memory for near-real-time video processing, composition and switching. GPUs are programmable, typically through standard OpenGL video libraries, which allows great flexibility in video composition.
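The composition step described above can be pictured with a short sketch. The code below is a minimal, CPU-side NumPy analogue of the picture-in-picture merge a video server's GPU would perform on frames held in frame-buffer memory (for example via OpenGL texture operations); the frame sizes, inset scale and corner placement are illustrative assumptions rather than values from any particular product.

    import numpy as np

    def picture_in_picture(main, inset, scale=0.25, margin=16):
        """Shrink `inset` and overlay it in the bottom-right corner of `main`.
        Frames are H x W x 3 uint8 arrays standing in for GPU frame-buffer textures."""
        h, w = main.shape[:2]
        ih, iw = int(h * scale), int(w * scale)
        # Nearest-neighbour downscale (a GPU would sample a texture instead).
        rows = np.arange(ih) * inset.shape[0] // ih
        cols = np.arange(iw) * inset.shape[1] // iw
        small = inset[rows][:, cols]
        out = main.copy()
        out[h - ih - margin:h - margin, w - iw - margin:w - margin] = small
        return out

    # Two synthetic "camera" frames standing in for captured sensor streams.
    cam1 = np.full((1080, 1920, 3), (30, 60, 90), dtype=np.uint8)    # main sensor
    cam2 = np.full((720, 1280, 3), (200, 180, 40), dtype=np.uint8)   # secondary sensor
    composited = picture_in_picture(cam1, cam2)
    print(composited.shape)   # (1080, 1920, 3) -> one frame for a cockpit display

A picture-by-picture (PbP) or quad layout follows the same pattern, with each scaled frame written into its own region of the output buffer.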
Efficient recording

The convergence of GPU and CPU processing resources has enabled video servers to add recording capabilities to video switches, removing the need for dedicated recorders. Efficient video recording requires dramatic reductions in the video's storage footprint. Video compression algorithms, including MPEG-2, H.264 and H.265, reduce the quantity of video data while preserving its quality. In the case of H.264, the compression ratio is about 300:1 while maintaining good image quality. High-efficiency video coding (HEVC), or H.265, can double this again, achieving a compression ratio of around 600:1. The processing devices used in modern video servers allow these advanced recording functions to be accelerated with native hardware compression engines and through the sharing of a common video buffer.

Figure 2. Processing, switching and video-merging functions are configured and managed via ARINC 429, serial or Ethernet protocols

Once compressed, video streams are typically reduced to 5-20 Mbps, enabling hours of video to be stored on a single solid-state drive (SSD) or even an inserted USB thumb drive.
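To put these compression ratios in concrete terms, the short calculation below estimates the compressed bitrate and recording time for the 300:1 and 600:1 figures quoted above. The 1080p60 source format and the 256 GB drive capacity are assumptions chosen purely for illustration.

    # Rough recording budget (illustrative source format and drive size).
    width, height, fps, bits_per_pixel = 1920, 1080, 60, 24   # assumed 1080p60 source
    raw_bps = width * height * fps * bits_per_pixel           # ~3 Gbit/s uncompressed
    drive_bits = 256e9 * 8                                     # assumed 256 GB SSD

    for codec, ratio in [("H.264", 300), ("H.265/HEVC", 600)]:
        compressed_bps = raw_bps / ratio                       # lands in the 5-20 Mbit/s range
        hours = drive_bits / compressed_bps / 3600
        print(f"{codec}: {compressed_bps / 1e6:.1f} Mbit/s, ~{hours:.0f} h on a 256 GB drive")

With these assumptions the result is roughly 10 Mbit/s and about 57 hours of recording for H.264, and roughly 5 Mbit/s and about 114 hours for H.265, consistent with hours of video on a single drive.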
