

Not Real, But Close Enough

Virtual reality technology is putting more data on large screens. And it’s better data: higher resolution, easier access, different viewpoints, more accurate tracking, and lower cost.

Automotive was an early adopter of virtual reality (VR). “Any time they can mock up something virtually instead of physically, they save a ton of time and a ton of money,” says Bill Schmidt, market development manager for transportation for Barco Simulation USA (Xenia, OH; www.barco.com). Medical research was also an early adopter. From those two industries, VR technology was refined for movies and related media, then further optimized for speed and affordability in the quest for more realistic video games in the consumer electronics market.

Along the way, VR has physically changed. The price of computer systems has plummeted, while the processing power of workstations has skyrocketed. Graphics processing units (GPUs) have greatly advanced—thanks to the popularity of video games (go play with a Nintendo Wii to see what I mean)—to render megapixels both efficiently and inexpensively. Grid computing, workstation and GPU clusters, and cluster-aware software easily crunch through data. VR suppliers keep working on increasing pixel density, improving image quality, and making it easier to hook all the parts and multiple data sources together to create a VR environment. The bottom line, says Schmidt: “They’re getting hungry for pixels out there.”

VR is now coming full circle back to design and manufacturing. “Because you’re retaining that CAD data, you basically stay within your design and manufacturing architecture to do your VR work,” explains John Francis, vice president of industrial & military sales for Motion Analysis Corporation (Santa Rosa, CA; www.motionanalysis.com). “You can do your VR analysis and plunk that data back into your design and manufacturing architecture.”

 

Adding humans to VR

Designing something is nice, explains Francis, but if a person can’t put it together or maintain it, “you’re wasting your time. If you can validate a process by factoring in the human element, [you eliminate] heavy cost penalties.” Toward that end, Motion Analysis has a software plug-in that inserts a digital mannequin into a virtual (digital) environment within Dassault’s CATIA V5 (www.3ds.com). This mannequin mimics the motions of a human being. “Instead of doing predictive studies, you’re doing actual studies. The confidence level in human factors is that much greater,” says Francis. Merging the full-body movements of a real person into a VR environment involves two technologies—plus a real, live human being to serve as the model. First, an array of cameras monitors the light reflected from markers placed on the person. Second, software converts the reflected light, through simple triangulation, into positional data for each marker. That data drives the motions of the mannequin in the virtual environment.
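In simplified form, the triangulation step looks like the sketch below: two calibrated cameras observe the same reflective marker, and its 3D position is recovered by linear least squares. This is a minimal two-camera version with toy projection matrices; commercial systems such as Motion Analysis’s use many cameras and far more robust solvers.

```python
# A minimal sketch of marker triangulation from two calibrated cameras.
# The projection matrices here are illustrative placeholders.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2 : 3x4 camera projection matrices (from calibration)
    uv1, uv2 : (u, v) pixel coordinates of the marker in each image
    Returns the marker position as a 3-vector in world coordinates.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one translated 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = np.array([0.2, 0.1, 2.0, 1.0])           # true position, homogeneous
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]        # project into each view
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]
print(triangulate(P1, P2, uv1, uv2))              # ~[0.2, 0.1, 2.0]
```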

Now the next step: capturing finger movements and making users literally feel they’re in a VR environment. Consider, for example, the Cyber line of products from Immersion Corporation (San Jose, CA; www.immersion.com). The base product, CyberGlove, is a glove with 22 sensors to track and capture separate hand and finger movements. New this year is a Bluetooth wireless version of CyberGlove, which eliminates the physical tether to a workstation. Working with CyberGlove are a variety of products. VirtualHand for V5 is a plug-in that lets CATIA users “reach in and touch your CATIA designs,” says Mike Levin, Immersion’s vice president and general manager of Touch Interface Products. VirtualHand works out of the box directly with the CAD models—no import/export processes involved. (Similarly, VirtualHand for V5-DMU works directly within ENOVIA DMU V5.)
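Before CAD software can use the glove’s output, the raw bend-sensor readings have to be mapped onto joint angles of a hand model. The sketch below shows one plausible way to do that; the sensor layout, calibration numbers, and read_raw_frame() stub are illustrative assumptions, not Immersion’s actual SDK or data format.

```python
# A hedged sketch of turning raw data-glove readings into joint angles.
import random

NUM_SENSORS = 22  # e.g. flexion sensors per finger, abduction, palm, wrist

def read_raw_frame():
    """Stand-in for one frame of raw 8-bit sensor readings from the glove."""
    return [random.randint(0, 255) for _ in range(NUM_SENSORS)]

# Per-sensor calibration: raw value at fully-open and fully-closed pose,
# and the joint's range of motion in degrees (illustrative numbers).
calibration = [{"open": 30, "closed": 220, "range_deg": 90.0}] * NUM_SENSORS

def raw_to_angles(raw, cal):
    """Linearly map each raw reading onto its joint's angular range."""
    angles = []
    for value, c in zip(raw, cal):
        t = (value - c["open"]) / (c["closed"] - c["open"])
        t = min(max(t, 0.0), 1.0)           # clamp to the calibrated range
        angles.append(t * c["range_deg"])   # 0 deg = open, range_deg = closed
    return angles

angles = raw_to_angles(read_raw_frame(), calibration)
print([f"{a:.1f}" for a in angles[:5]])     # first five joint angles, in degrees
```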

CyberTouch adds tactile feedback to the glove: six vibration actuators—one on each finger and one in the palm—give the user the impression of touching a virtual object. CyberGrasp is an exoskeleton that clasps the fingertip and middle joint of each finger—ten connections in all. Unlike CyberTouch, CyberGrasp restricts each finger as the user “grabs” a virtual object, so the user gets the impression of actually coming in contact with something solid, such as a steering wheel. CyberForce is a pantograph-like force-feedback system that attaches to CyberGlove and provides force feedback to the user’s hand and arm, giving the impression of contacting an object anywhere in space.
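A common way to compute the forces such devices apply is penalty-based haptic rendering: when the tracked fingertip penetrates a virtual surface, push back with a spring-like force proportional to the penetration depth. The sketch below assumes a spherical object and an arbitrary stiffness constant; Immersion’s actual control software is certainly more sophisticated.

```python
# A minimal sketch of penalty-based haptic rendering: push the fingertip
# back out of any virtual surface it penetrates. The sphere object and
# stiffness constant are illustrative assumptions.
import numpy as np

STIFFNESS = 500.0  # N/m -- spring constant of the virtual surface

def contact_force(fingertip, center, radius):
    """Force on a fingertip penetrating a virtual sphere (zero outside it)."""
    offset = fingertip - center
    dist = np.linalg.norm(offset)
    depth = radius - dist                 # penetration depth, if positive
    if depth <= 0.0:
        return np.zeros(3)                # no contact, no force
    normal = offset / dist                # push out along the surface normal
    return STIFFNESS * depth * normal     # Hooke's-law penalty force

# A virtual soda can approximated as a sphere, 3 cm radius, at the origin.
can_center, can_radius = np.zeros(3), 0.03
finger = np.array([0.0, 0.0, 0.025])      # fingertip 5 mm inside the surface
print(contact_force(finger, can_center, can_radius))  # ~[0, 0, 2.5] N
```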

With the combination of CyberGlove, the CyberGrasp exoskeleton, and CyberForce, explains Levin, “you can sit inside a virtual car and manipulate different objects. You can feel where the steering wheel is, you can feel the center stack, you can feel different controls, you can pick up a soda can from the cupholder. And all of this is synchronized with the images you see through a head-mounted display.”

CyberGlove costs from $10,000 to $20,000 depending on the number of sensors and desired resolution. CyberTouch is another $10,000; CyberGrasp, another $20,000. A full CyberForce system costs around $75,000.

 

Projecting a wall of VR

In VR displays, users “want to get right up to the screen and not see any pixels. They just want to see surfaces,” says Schmidt. However, merging multiple projectors so they display what looks like a seamless virtual image isn’t easy. This is where a lot of the R&D dollars in VR have gone: specialized projectors that feature color management, brightness uniformity, and geometry correction; specialized screen materials; and software to manage the proper display of visual data.

First, projectors; specifically, digital projectors. Says Jeff Blum, vice president of marketing and business development for Mechdyne Corporation (St. Jacobs, Ont.; www.mechdyne.com), “The brightness [of digital projectors] is ten times that of a cathode ray tube (CRT) projector. The colors seem more vibrant. The images are more stable; they don’t drift over time as with CRT projectors. Digital projectors don’t need constant adjustments like CRT projectors.” In fact, Mechdyne hasn’t installed a CRT projector in many years. (CRT projectors are still being sold, mostly for simulation applications, such as flight simulators, that have a preponderance of fast-moving images; digital projection, because of its pixelated nature, can’t maintain a smooth, clean image across the entire display in those cases.)

New projectors based on Sony Corporation’s SXRD technology can display images with pixel resolutions of 4096 x 2160. These projectors are expensive, admits Blum, but if resolution is key, “it’s better to buy one projector with high resolution than to buy four or more projectors and put them together to make up that same resolution.”

Barco manufactures a vast line of digital projectors. One of its latest is the Galaxy NH-12, a bright (12,000 lumens), 1920 x 1080-pixel, high-definition, stereoscopic projector. Texas Instruments’ Digital Light Processing (DLP) technology enables the projector to handle the frame rates necessary for stereoscopic projection and viewing. It’s a networked projector that can be controlled with a keyboard and mouse on a Microsoft Windows workstation, including the selection of predefined layouts and data sources. The NH-12 can display data in multiple windows that can be resized, dragged, and made to overlap anywhere on the projected screen. It can display multiple windows of mono and stereo data at the same time, create a continuous image without blurry overlap where projections converge, and accurately project images from different angles and across spherical or curved surfaces.
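The seamless overlap comes from edge blending, one of the corrections mentioned earlier: in the region two projectors share, each one’s intensity is ramped down so the summed light output stays uniform. A minimal sketch, with the overlap width and display gamma as illustrative assumptions:

```python
# A hedged sketch of edge blending across a two-projector overlap zone.
import numpy as np

GAMMA = 2.2  # ramps are computed in gamma space so the emitted light sums evenly

def blend_ramp(width_px, rising=True):
    """Per-pixel attenuation across an overlap zone of width_px pixels."""
    t = np.linspace(0.0, 1.0, width_px)   # 0 at start of overlap, 1 at end
    ramp = t if rising else 1.0 - t       # one projector fades in, the other out
    return ramp ** (1.0 / GAMMA)          # pre-compensate for display gamma

overlap = 256  # pixels shared by the right edge of projector A and left edge of B
a = blend_ramp(overlap, rising=False)     # projector A fades out
b = blend_ramp(overlap, rising=True)      # projector B fades in
# After the display applies gamma, the light output sums to ~1.0 everywhere:
print(np.allclose(a**GAMMA + b**GAMMA, 1.0))  # True
```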

 

VR data management

Another crucial aspect of VR is simply the management of data, projectors, and projection, which isn’t simple at all. The Barco XDS-1000 software manages multiple internal and external data sources for powering large, multi-channel, high-resolution display walls. The software has a Microsoft Windows XP interface so that users can share, visualize, and edit any number of application or source windows, whether locally stored or networked. XDS-1000 replaces manually switching between multiple workstations, laptops, CD/DVD players, projectors, displays, and other input/output devices with a software-based control panel. Besides being time consuming, such manual switching affects signal timing and related settings. Instead, XDS-1000 automatically stretches the source image data to match a display that typically requires far more pixels than the native dataset provides. The upshot is improved display quality and a smoother virtual experience.
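That stretching step amounts to resampling a source window up to the wall’s much larger pixel count. A minimal sketch, with illustrative resolutions and simple bilinear filtering standing in for the higher-quality scaling a wall controller would actually use:

```python
# A hedged sketch of upscaling a source window to a display-wall segment.
import numpy as np

def bilinear_upscale(src, out_h, out_w):
    """Resize a 2D image (one channel) to (out_h, out_w) with bilinear filtering."""
    h, w = src.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                       # vertical blend weights
    wx = (xs - x0)[None, :]                       # horizontal blend weights
    top = src[np.ix_(y0, x0)] * (1 - wx) + src[np.ix_(y0, x1)] * wx
    bot = src[np.ix_(y1, x0)] * (1 - wx) + src[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

native = np.random.rand(768, 1024)                # source window at native resolution
wall = bilinear_upscale(native, 2160, 3840)       # stretched for a 4K wall segment
print(wall.shape)                                 # (2160, 3840)
```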

Conduit, a graphics distribution middleware application from Mechdyne, addresses a different problem in displaying VR. Not all design and engineering software applications, such as CAD, CAE, and visualization tools, can work across computer clusters made up of multiple workstations, GPUs, CD/DVD players, videoconferencing systems, and so on. Conduit enables mainstream applications such as AliasStudio 13, Autodesk Maya, Autodesk 3ds Max, CATIA, Pro/ENGINEER, and SolidWorks to be cluster-compatible: it intercepts the application’s OpenGL output, duplicates the flow of 3D commands, and feeds those data to a multi-GPU machine or a cluster of computers. Conduit also reorients its output to match a desired display configuration. The software eliminates having to convert solid models, CAE results, and other visualization data for a third-party viewing application, a conversion that often cost data integrity, image quality, or both.
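The intercept-and-duplicate pattern can be sketched generically: a proxy captures the application’s draw calls by name and re-issues them to every render node in the cluster. Conduit does this at the OpenGL level in native code; the class and call names below are hypothetical stand-ins, not Mechdyne’s implementation.

```python
# A hedged sketch of duplicating a graphics command stream across a cluster.
class RenderNode:
    """One render target in the cluster (e.g. one GPU / projector channel)."""
    def __init__(self, name):
        self.name = name

    def execute(self, call, *args, **kwargs):
        # A real node would apply its own view offset for its screen, then render.
        print(f"{self.name}: {call} args={args} kwargs={kwargs}")

class CommandDuplicator:
    """Intercepts graphics calls by attribute name and fans them out to every node."""
    def __init__(self, nodes):
        self.nodes = nodes

    def __getattr__(self, call):
        def broadcast(*args, **kwargs):
            for node in self.nodes:               # duplicate the command stream
                node.execute(call, *args, **kwargs)
        return broadcast

# The application issues its drawing calls once; the proxy feeds three channels.
gl = CommandDuplicator([RenderNode("left-wall"),
                        RenderNode("front-wall"),
                        RenderNode("right-wall")])
gl.load_model("steering_wheel.obj")               # hypothetical stand-ins for the
gl.draw_frame(viewpoint=(0.0, 1.6, 0.0))          # intercepted OpenGL stream
```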

 

Evaluating VR systems

In the final analysis, the quality of VR is based on how well it processes 3D data. Explains Francis, “Many competitors will talk about sensor resolution, camera resolution, megapixels, frame rates. That all applies to consumer electronics. Those are all 2D characteristics. 2D data is not what you pass to the contextual application [such as CAD or process simulation software]. The human eye cannot discern beyond 35 frames/second [35 Hz], so what does capturing at 10,000 hertz mean? It means data overload. You’re collecting data you can’t process. What really counts is how well the 3D data is processed and what that connectivity is [from the software that processes 3D data] to a contextual application.” In short, says Levin, selecting VR systems “usually comes down to resolution and the update rates. The higher the resolution and update rates, the smoother the virtual motion, the less the flicker, the fewer inaccuracies.”
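A quick back-of-envelope calculation makes the data-overload point concrete. The marker count and bytes-per-sample figures below are assumed for illustration only:

```python
# Rough data rate for a motion-capture session at two sampling rates.
# 40 markers and 12 bytes per 3D sample (x, y, z as 4-byte floats) are
# assumed figures, not vendor specifications.
MARKERS = 40
BYTES_PER_SAMPLE = 12

def rate_mb_per_s(hz):
    return MARKERS * BYTES_PER_SAMPLE * hz / 1e6

print(f"at 10,000 Hz: {rate_mb_per_s(10_000):.1f} MB/s")  # 4.8 MB/s
print(f"at     35 Hz: {rate_mb_per_s(35):.3f} MB/s")      # 0.017 MB/s
```

At the assumed figures, the 10,000 Hz capture produces nearly 300 times the data of a 35 Hz capture with no gain the eye can perceive.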

Concludes Blum, “Customers are not concerned with the projector or how it’s all connected. Their concern is about the data that’s on the screen. They’re looking for more intuitive analysis of their data.”