A “graphics processing unit” (GPU) sounds like something that would be used to process graphics. And that is true.
Chances are, says Danny Shapiro, director of marketing at NVIDIA (nvidia.com), GPUs are in the hardware in your company’s design department. What’s more, they’re probably part of that gaming system you have at home. Go to the doctor and have a scan: GPUs are part of the medical imaging equipment.
Yes, they’re good for image processing. Or, as Shapiro puts it in more technical terms: “parallel computation.” Beyond a suite of physical simulations, GPUs are used for a variety of other applications. “Wall Street uses parallel processing for options pricing. You can run 10 to 100 times faster on a bank of GPUs than CPUs,” he says. That’s “CPU” as in “central processing unit.” You know that “Intel Inside” sticker on your computer? That’s for the CPU.
NVIDIA invented the GPU back in 1999 and has subsequently been working on ways and means of making the processors—which have hundreds of cores running in parallel—easier to program.
So what’s this have to do with anything beyond designing parts or cars, simulating metal flow or factory floors?
Plenty. And it comes down to that parallel processing Shapiro mentions. Whereas a CPU is good at handling serial tasks—one thing after another, in sequence—a GPU excels at tasks that can be “broken into little chunks and processed simultaneously,” or “parallelized.” (“For a system to be most efficient, they need a combination of a GPU and a CPU,” Shapiro says. NVIDIA’s Tegra 2 processor, for example, integrates a dual-core CPU and a GPU on the same chip, so tasks better suited to the GPU, like graphics processing, can be offloaded from the CPU. Note well that this is a “system on a chip” (SoC), not a CPU here on a motherboard and a GPU over there. That integration matters for performance.) By dividing the bigger task into smaller pieces, the GPU can operate at speeds orders of magnitude greater than a CPU’s.
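To make the serial-versus-parallel distinction concrete, here is a minimal sketch in Python. The `brighten` task and the chunk size are hypothetical stand-ins; a thread pool plays the role of the GPU’s many cores, since each chunk can be processed independently of the others.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(chunk):
    # Hypothetical per-chunk task: brighten a block of pixel values,
    # clamping at 255. No chunk depends on any other chunk.
    return [min(p + 40, 255) for p in chunk]

def process_serial(chunks):
    # CPU-style: one chunk after another, in sequence.
    return [brighten(c) for c in chunks]

def process_parallel(chunks):
    # GPU-style: all chunks handed out at once; a pool of workers
    # stands in for the GPU's parallel cores.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(brighten, chunks))
```

Because the work is “embarrassingly parallel,” both versions produce identical output; only the scheduling differs.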
Consider, for example, vision-based lane departure warning and collision-avoidance systems. These demand intensive image processing, with video fed into the system at 30 frames per second. A CPU can handle it, but processing at a sufficiently high rate requires either a bigger CPU or a higher clock speed on the existing one. Either way, power draw goes up—and as OEMs look to minimize power draw, that is problematic.
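The 30 fps figure the article cites implies a hard time budget per frame. The stage timings below are purely illustrative numbers, not measurements, but they show the kind of back-of-the-envelope arithmetic involved:

```python
# At 30 frames per second, the whole vision pipeline must finish
# inside the per-frame budget or frames get dropped.
FPS = 30
frame_budget_ms = 1000 / FPS  # roughly 33.3 ms per frame

# Hypothetical stage timings (illustrative, not measured):
stages_ms = {"capture": 5.0, "de-noise": 8.0, "lane detection": 15.0}

total = sum(stages_ms.values())        # 28.0 ms of work per frame
headroom = frame_budget_ms - total     # what's left before frames drop
```

If `total` creeps past the budget, the options are exactly the ones described above: a bigger processor or a higher clock, both of which cost power.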
Tegra SoCs, on the other hand, are designed for low power consumption: in fact, they are now widely deployed in smart phones and tablet computers precisely because of their limited appetite for power. Given that combination of blazingly fast processing and low power draw, GPUs have an obvious advantage in these applications.
Shapiro talks about the possibility of a vehicle that offers pedestrian protection or 360° surround view for parking and lane departure warning. “This could be done with the same processor”—“same” as in a single SoC. First of all, he points out that both applications are “problems with massive amounts of data.” Take the 360° view. This involves four camera systems feeding information. So the processor has to de-warp the data (as it comes from various points of view), blend it, then overlay a rendered vehicle on top of the image for viewing on the display screen in the vehicle. “The GPU can handle all of this with no problem.” Because the 360° view function isn’t needed when driving on the freeway, different algorithms can be run for the lane departure warning system. The GPU provides the capability to perform computer vision—the ability to analyze video feeds in real time and determine the content of the stream—which lends itself to developing collision avoidance systems.
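The de-warp/blend/overlay steps described above can be sketched as a toy pipeline. Everything here is a stand-in: the “de-warp” is a fixed index remap rather than a real lens-undistortion model, the blend is a plain average, the frames are tiny 8×8 arrays, and the function names are invented for illustration.

```python
import numpy as np

H, W = 8, 8  # tiny frame size, for illustration only

def dewarp(frame):
    # Stand-in for lens undistortion: remap rows by a fixed index map
    # (here just a vertical flip). A real system would use a per-camera
    # calibration to compute the remap.
    return frame[np.arange(H)[::-1], :]

def blend(frames):
    # Simple average blend of the de-warped camera views.
    return np.mean(np.stack(frames), axis=0)

def overlay_vehicle(canvas, value=255.0):
    # Paint a placeholder "rendered vehicle" in the center of the
    # composite, as the article describes.
    out = canvas.copy()
    out[3:5, 3:5] = value
    return out

def surround_view(camera_frames):
    # Full pipeline: de-warp each feed, blend, overlay the vehicle.
    return overlay_vehicle(blend([dewarp(f) for f in camera_frames]))
```

Each camera frame passes through the same independent per-pixel operations, which is exactly the “massive amounts of data, little chunks in parallel” shape a GPU is built for.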
Voice recognition is another application the GPU handles well (despite it being audible, not graphical—data is data, after all). While some companies are pursuing a cloud-based approach to voice, Shapiro thinks on-board speech recognition makes more sense because of the very real possibility of dead spots (e.g., in parking garages) or simple lack of connectivity (e.g., in vast stretches of the so-called “fly-over” states). So a GPU could be used as the heart of an infotainment system. And because of the amount of compute power it has, and its ability to run different programs, in the case of a potential emergency—say the collision detection warning system has been activated—the infotainment GPU could be drafted by the safety GPU to provide even more compute power.
NVIDIA, Shapiro says, has been working with car makers including Audi, Lamborghini, BMW, and Tesla on using GPUs. And, yes, he acknowledges that these are not necessarily the biggest-volume firms. Yet part of the reason NVIDIA is working closely with them is that (1) they are technology focused and (2) they tend to be more nimble and responsive when it comes to deploying new technology.
He points out, however, that consumer expectations are such that the mainstream domestics are going to have to give serious consideration to this high-performance processing capability: “Consumers are having beautiful experiences on their mobile devices, and these have Tegra processors inside. Drivers are going to demand that experience in their vehicles, as well.”
A couple of considerations that make these systems on a chip practical for use in a whole range of vehicles:
1. Volume. Given that they’re used in tablets and smart phones, gaming systems and design hardware, huge numbers are being produced. Auto makers can take advantage of economies of scale.
2. Updates. Software modifications can allow the system to stay up to date. As Shapiro notes, “Because cars have such a long life span, especially compared with phones and tablets, given the high performance capability of a Tegra processor it is possible to stay current through software updates and apps for a rich experience for many years to come.”