"We've really made a big change in the way we're doing the aerodynamic design," says Alex Burns, general manager, Williams Grand Prix Engineering (Grove, England), speaking of the development program undertaken to create the FW27, the vehicle that the BMW WilliamsF1 team is campaigning in the 2005 Formula One season. The biggest change is in the amount of computational fluid dynamics (CFD) deployed to create the vehicle—and in the ongoing use of CFD as the races unfold around the world.
Burns explains that although the Williams organization has been using CFD for the past several years, wind tunnel testing has been the "main aerodynamic tool." CFD was used, in effect, as a backup. "Now it is much more of a predictive tool, and many of the things that go into our tunnel"—yes, Williams has its own wind tunnel, where they run 60% scale models—"have already been run through CFD." What's been happening is that the organization has been increasing its in-house computing capability, working with Hewlett-Packard (www.hp.com), its principal sponsor and partner. Most recently, an HP Cluster Platform 4000 supercomputer system running Linux was installed at the Williams headquarters. They're also using HP Integrity servers, C8000 UNIX workstations, various printing and imaging products, a transportable data center running ProLiant servers, OpenView storage systems, and more. Additionally, HP operates a computing facility in nearby Bristol that includes a cluster of 96 64-bit 1.3-GHz Itanium processors with a Quadrics interconnect; WilliamsF1 deployed that resource as well, in effect gaining another "virtual" wind tunnel.
IMPROVING OUTPUT. OK. They have a lot of gear that's used both internally and at trackside. What does this mean from a functional point of view? Speaking of what happens in development, Burns replies that the technology roadmap they'd devised at Williams called for reaching a point where engineers could input a model at the end of the day and come back the next morning to the results; previously, they needed on the order of 24 to 36 hours to receive the results. "This cluster has not only gotten us there, but we can actually run two full car cases in parallel overnight." He adds, "Taking a day out of our design iteration timescales is very important to us."
One more thing about the models they're solving: "In the last two years we've increased our model size about fivefold and we've decreased the time to solve that by about eightfold." More done faster. To put it mildly.
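Those two figures compound: solving a model five times larger in one-eighth the time works out to roughly a fortyfold gain in effective throughput. A back-of-the-envelope sketch makes the arithmetic explicit (the 5x and 8x multipliers come from Burns's quote; the baseline mesh size and solve time are illustrative assumptions, not Williams figures):

```python
# Back-of-the-envelope: effective CFD throughput gain.
# The 5x model-size and 8x solve-time factors are from the article;
# the baseline cell count and solve time are illustrative assumptions.
baseline_cells = 20e6   # hypothetical mesh size two years earlier
baseline_hours = 32.0   # mid-range of the quoted 24-36 hour turnaround

new_cells = baseline_cells * 5   # "increased our model size about fivefold"
new_hours = baseline_hours / 8   # "decreased the time to solve ... eightfold"

throughput_gain = (new_cells / new_hours) / (baseline_cells / baseline_hours)
print(f"Effective throughput gain: {throughput_gain:.0f}x")  # 5 * 8 = 40x
```

Whatever the absolute numbers, the ratio is the point: model fidelity and turnaround improved at the same time, rather than trading one for the other.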
And to get a sense of those models, Dr. Tim Bush, Hewlett-Packard's High Performance Technical Computing Manager for Europe, Middle East and Africa, who works with Burns and his colleagues, says that compared with conventional vehicle manufacturers, the models being processed at Williams "are probably four to five times bigger than the very largest models being done at the moment."
"Time compression is everything to us, really," says Burns. "Not only the speed of the car around the track, but the speed with which we go through any process. Whether it's a design process or a manufacturing process or a decision-making process, we're always looking for time compression." The CFD capability and the cluster are also permitting some parts to go straight from CFD to product release, skipping wind tunnel testing altogether.
Williams doesn't produce the engines. The BMW P84/5 V10 engine comes from BMW's engine development center in Munich. Burns says that during development, data is sent from Germany to England so that Williams has the information it needs to design the chassis.
CRASH. They're also running predictive crash tests, using a dynamic finite element analysis approach. The Formula One governing body, the Fédération Internationale de l'Automobile (FIA), mandates specific crash tests. Burns says that they're running crash simulations of the carbon fiber structures, and creating energy-absorption plots that permit them to design parts; the correlation between the predictive modeling and the actual tests has proven to be good. "If we weren't modeling," Burns says, "we'd have to do an awful lot of experimental crashes in order to find the optimum balance between energy absorption and weight reduction." There are two cited benefits: "The simulations are giving us information much quicker because we don't have to physically build the parts and go and test them"—at least not until they are conducting the "official" tests—"and it's much less costly because we're not smashing so many parts." Carbon fiber components are rather dear. To put it mildly.
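The trade-off Burns describes—enough energy absorption at the least weight—is, at its core, a constrained selection over simulated candidates. A minimal sketch of that idea, with entirely invented numbers (the candidate layups, their simulated results, and the energy target are illustrative assumptions, not Williams or FIA figures):

```python
# Illustrative sketch only: picking a crash-structure layup from simulated
# energy-absorption results. All numbers below are invented for illustration;
# they are not Williams data or FIA requirements.
candidates = [
    # (plies, absorbed energy in kJ from simulation, mass in kg)
    (10, 62.0, 1.8),
    (12, 78.5, 2.1),
    (14, 91.0, 2.5),
    (16, 103.0, 2.9),
]

required_kj = 75.0  # hypothetical energy-absorption target

# Lightest candidate that still meets the energy requirement.
feasible = [c for c in candidates if c[1] >= required_kj]
best = min(feasible, key=lambda c: c[2])
print(f"Chosen layup: {best[0]} plies, {best[1]} kJ, {best[2]} kg")
```

Each simulated candidate here stands in for a crash run that would otherwise require building and destroying a physical part—which is the cost and time saving Burns is pointing at.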
BEYOND F1. So what about HP? Does the company gain anything beyond a good customer and the ability to have its logo plastered on cars and signs and Nomex driving suits? Yes, it gains learning that it believes can be deployed among its more conventional customers, such as those in the comparatively mundane auto industry (let's face it: even the most exotic mass production car doesn't have the cachet of F1). Dr. Tim Bush notes, "We at HP see F1 as a microcosm of what's going on in the 'real' world." He explains, for example, that just as Williams is deploying crash test analyses prior to component/vehicle build, large aerospace manufacturers are depending on these analyses in order to maintain economic feasibility for aircraft programs. And just as vehicle manufacturers are driving down their product development times, so is Williams: Bush reckons that by the time the rules for the '05 season were established, there were just five months to develop and produce the vehicle.
One might imagine that once the FW27 was done it was, well, done. Not so. Burns says: "That car will be different at every race. For every race we will bring through an aerodynamic improvement—even when the races are just one week apart. This is why we look at time compression as so important." There are 19 races scheduled for the 2005 season, more than ever before, and yet the season is actually one week shorter than has been the case (from March 13 to October 16). They're using the OpenView Storage mirroring system and a wireless local area network setup to beam information between the various tracks and the Grove HQ so accurate information is at hand, pronto. Speed—and accuracy—are everything.