VR design: how to achieve tight integration between the GPU and the display

The VR industry is evolving rapidly, and mass production of VR hardware appears to be only a matter of time. In its early stages, VR will mostly take the form of high-end gaming equipment, but its range of applications is certain to expand quickly. Before VR becomes mainstream, it is worth slowing down and examining the problems it currently faces.

First, we need a clear definition of latency: it is the time the system takes to convert an actual head movement into the corresponding image on the headset's screen. These two events must occur close enough together that, as in the real world, no time difference is perceived; if the latency is too long or fluctuates, the immersive experience feels unnatural and the brain pushes back, causing nausea or dizziness - not a pleasant feeling. Industry research suggests that "motion-to-photon" latency must stay below 20 milliseconds (ms) to create a smooth, natural VR experience. Since the standard refresh rate is 60Hz, one refresh interval is about 16.7ms, so the entire pipeline effectively has to fit within a single frame. This goal is not easy to achieve, but with the right techniques it is within reach.
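The arithmetic above can be sketched in a few lines. This is only an illustrative calculation; the 20ms comfort threshold is the figure cited in the text, not a hard physical limit.

```python
def frame_period_ms(refresh_hz: float) -> float:
    """Time available per display refresh, in milliseconds."""
    return 1000.0 / refresh_hz

def within_budget(latency_ms: float, budget_ms: float = 20.0) -> bool:
    """Check a motion-to-photon latency against the ~20 ms comfort threshold."""
    return latency_ms <= budget_ms

# At 60 Hz the display refreshes roughly every 16.7 ms, so a pipeline
# that delivers a fresh frame every refresh stays inside the 20 ms budget.
period = frame_period_ms(60)
```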

Techniques for reducing latency

Combining a few specific technologies makes it possible to build a low-latency VR system. First, consider front-buffer rendering. Graphics applications, including those on Android devices, normally use double or triple buffering: the GPU renders pixels into a rotating set of off-screen buffers, which are swapped with the on-screen buffer at the end of each display refresh, producing a smooth, tear-free experience. This keeps frame times even, but it also adds latency - the opposite of what VR needs. With front-buffer rendering, the GPU bypasses the off-screen buffers and renders directly into the on-screen buffer, cutting latency. Front-buffer rendering must be precisely synchronized with the display to guarantee that GPU writes always stay ahead of the display's reads. The Mali GPU's context-priority extension enables fast scheduling of GPU tasks, so the front-buffer rendering workload runs at a higher priority than less urgent tasks, improving the user experience.
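The latency difference between the two approaches can be modeled roughly. This is a simplified sketch, not a measurement of any real driver: the double-buffered path assumes the finished frame waits for the next vsync before being scanned out, and the front-buffer path assumes a pixel becomes visible, on average, half a scan-out after it is written.

```python
def double_buffered_latency_ms(render_ms: float, refresh_hz: float) -> float:
    """Rendered frame sits in the back buffer until the next vsync swap,
    then is scanned out over the following refresh."""
    period = 1000.0 / refresh_hz
    wait_for_swap = period - (render_ms % period)  # time left until vsync
    return render_ms + wait_for_swap + period      # render + wait + scan-out

def front_buffered_latency_ms(render_ms: float, refresh_hz: float) -> float:
    """Racing the beam: pixels become visible as the scan-out reaches them,
    with no intermediate buffer swap (half a scan-out on average)."""
    period = 1000.0 / refresh_hz
    return render_ms + period / 2.0
```

With an 8ms render at 60Hz, the model gives roughly 33ms for the double-buffered path versus roughly 16ms for the front-buffer path, which is why VR systems accept the extra synchronization burden.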

Eliminating extra buffering to reduce latency

The second key is choosing the right display type for the VR device. An organic light-emitting diode (OLED) display is a great tool for improving the VR experience, and it works very differently from the familiar, mature LCD. Driven by a thin-film transistor array behind the panel, each pixel on an OLED display is its own light source, whereas an LCD relies on a white LED backlight. An OLED pixel's brightness is determined by the current flowing through the organic film, and color is produced by independently driving the red, green, and blue sub-pixels. As a result, OLED can deliver high brightness, high contrast, and highly saturated color. Moreover, simply switching off parts of the screen yields a deeper black than an LCD, which can only block its backlight. Although this is usually marketed as a selling point of OLED screens, it is also critical for VR, because per-pixel illumination makes it much easier to achieve low persistence. A full-persistence display keeps the screen lit continuously, so each view lingers on screen even after it is out of date; a low-persistence display lights the image only while the view is correct, then goes dark. At a sufficiently high refresh rate this flicker is imperceptible, creating the illusion of a continuous image.
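Persistence can be expressed as the fraction of each refresh during which a pixel actually emits light. The 20% duty cycle below is an illustrative assumption, not a figure from any specific panel.

```python
def persistence_ms(refresh_hz: float, duty_cycle: float) -> float:
    """How long each pixel emits light per refresh, in milliseconds.

    duty_cycle = 1.0 models a full-persistence panel (always lit);
    smaller values model low-persistence strobing.
    """
    return (1000.0 / refresh_hz) * duty_cycle

# Full persistence at 60 Hz: lit for the whole ~16.7 ms refresh.
full = persistence_ms(60, 1.0)
# Low persistence at an assumed 20% duty cycle: lit for only ~3.3 ms,
# so the image stays sharp while the eye sweeps across the display.
low = persistence_ms(60, 0.2)
```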

This principle is essential for reducing image blur. Low persistence also allows greater flexibility: the display can present multiple partial images within one refresh and adjust the intermediate strips based on data collected by the headset's sensors, so that as the user's gaze sweeps across the screen, the image tracks the changing head position; this is impossible with a panoramic-backlit LCD. The key to a low-latency VR experience, therefore, is to render into the front buffer with a time-warping process while driving the OLED panel in blocks or strips. Done this way, the on-screen image adapts to head rotation almost instantly, and no other approach matches it.
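Driving the panel in strips means each strip has its own scan-out slot within the refresh, and each slot can use a head pose sampled just before it. A minimal timing sketch, assuming a top-to-bottom scan and evenly sized strips:

```python
def strip_scanout_times_ms(refresh_hz: float, n_strips: int) -> list:
    """Start time (ms) of each strip's scan-out within one refresh,
    assuming the panel is lit strip by strip from top to bottom."""
    period = 1000.0 / refresh_hz
    return [i * period / n_strips for i in range(n_strips)]

# With 4 strips at 60 Hz, the slots are ~4.2 ms apart, so each strip can
# be warped with a head pose up to ~12.5 ms fresher than a single pose
# sampled once for the whole frame.
slots = strip_scanout_times_ms(60.0, 4)
```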

Asynchronous time warping technique

The next key technology is asynchronous time warp. Because scene changes in an immersive VR application are relatively gradual, the difference between consecutive frames is small and fairly predictable. Warping means displacing an image rendered for a previous head position so that it matches the new head position. This process partially decouples the application frame rate from the display refresh rate, achieving low system latency for specific application scenarios. The displacement responds only to head rotation; it ignores head translation and in-scene animation. Although time warp is a stopgap, it is an effective safety net: it lets a device rendering at 30FPS present, at least as far as head tracking is concerned, an experience equivalent to 60FPS or higher.
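The rotational displacement can be illustrated with a deliberately crude model: treat a small yaw change as a horizontal shift of the last rendered frame. Real implementations reproject through the lens distortion and projection matrices on the GPU; the pixels-per-degree scale here is a made-up calibration for the sketch, not a real headset parameter.

```python
import numpy as np

def timewarp_shift(image: np.ndarray, yaw_delta_deg: float,
                   pixels_per_degree: float = 10.0) -> np.ndarray:
    """Approximate rotational time warp as a horizontal image shift:
    re-project the last rendered frame to the newest head yaw.
    pixels_per_degree is an illustrative assumption."""
    shift = int(round(yaw_delta_deg * pixels_per_degree))
    # Head turning right means the world appears to move left on screen.
    return np.roll(image, -shift, axis=1)
```

A usage sketch: if the head has rotated 0.1 degrees since the frame was rendered, the warp slides the image one pixel (at this assumed scale) instead of waiting for the application to render a whole new frame.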

Secret weapon of VR technology

In this article we explored how to achieve deep integration between the GPU and the display, but that is only the tip of the iceberg. Once we want to play video (possibly DRM-protected video) and integrate system notifications, the problem becomes more complicated. High-quality VR support requires multimedia components with strong synchronization capabilities that use bandwidth efficiently, not only to create the best experience for end users but also to maximize power efficiency and performance. With technologies such as ARM Frame Buffer Compression (AFBC) and ARM TrustZone, the ARM Mali Multimedia Suite (MMS) enables deep integration of the GPU, video, and display processors, making it a leading toolset for VR device development today.
