Power From the Masses With Grid Computing

A new class of software breaks through the bottleneck of limited compute resources—by using available compute resources.

Now that cost cutting has hit Information Services, how are design and engineering to crunch through their ever-increasing, compute-intensive jobs without the budget to buy a supercomputer? This is especially a problem for those big number-crunching applications that can consume all the compute power you can muster, such as finite element analysis (FEA), computational fluid dynamics (CFD), design-of-experiment simulations, and crash analysis—of an entire car.

Two words: grid computing.

Grid computing is like the national electric grid: A valuable resource somewhere in a pool of such resources is used as needed for a particular purpose. In this case, computing power, not electricity, is the utility. And instead of electric generators, it’s idle, relatively idle, or overpowered workstations and servers that are combined into a single, federated computing grid.

In operation, grid computing software divides a computational job into multiple, smaller, discrete tasks that are then distributed to the various computing nodes on the grid. (A few caveats: First, not all computational jobs can be broken up into small, independent tasks. Second, grid software handles some fairly major security issues raised by distributed computing. Last, the central processing unit [CPU] is not the only resource enlisted; computers on the grid can also “lend” data and data storage space.)

As individual computing tasks finish, the computing nodes send their results back to the grid server. The server, in turn, compiles the results and, if necessary, sends new tasks to the computing nodes. When a job is complete, the results are made available as in any conventional client/server environment.
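
Grid Engine and its kin handle that scheduling for you, but the divide-distribute-compile pattern is easy to see in miniature. The sketch below is purely illustrative: local worker processes stand in for grid nodes, and the job itself (summing squares over a chunked data set) is an arbitrary assumption, not anything from Sun's software.

```python
# Illustrative only: local worker processes stand in for grid nodes.
# A real grid scheduler (e.g. Grid Engine) handles the distribution,
# security, and result collection across separate machines.
from concurrent.futures import ProcessPoolExecutor

def worker_task(chunk):
    """One discrete task: compute a partial result on its slice of the job."""
    return sum(x * x for x in chunk)

def run_job(data, n_tasks=8):
    """'Grid server' role: split the job, farm out tasks, compile results."""
    size = max(1, len(data) // n_tasks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:           # stand-in for the grid
        partials = pool.map(worker_task, chunks)  # scatter tasks to "nodes"
    return sum(partials)                          # gather and compile

if __name__ == "__main__":
    print(run_job(list(range(1_000_000))))
```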

Grid computing works. Ford Motor’s Engine and Transmission Groups recently bought approximately 500 dual-processor Sun Blade workstations from Sun Microsystems, Inc. (Palo Alto, CA) for its designers and engineers. The dual-processor configuration might seem overkill for some individual users; however, each dual-processor workstation cost incrementally less than two single-processor machines. In fact, these workstations were bought specifically with grid computing in mind. During the day, one processor is allocated to the user’s local computer-aided design job, while the second processor is available for grid-based mechanical computer-aided engineering (MCAE) jobs. When users leave for the day, Ford effectively has 1,000 CPUs to run MCAE batch jobs in the grid.
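
The arithmetic behind that day/night split is simple enough to spell out. The short sketch below just tallies grid capacity under the allocation policy the Ford example describes; the numbers are taken straight from that example.

```python
# Illustrative tally of grid capacity under the Ford-style allocation.
WORKSTATIONS = 500   # dual-processor Sun Blade workstations
CPUS_PER_BOX = 2

def grid_cpus(after_hours: bool) -> int:
    """During the day one CPU per box stays local for CAD work;
    after hours both CPUs join the MCAE grid."""
    per_box = CPUS_PER_BOX if after_hours else CPUS_PER_BOX - 1
    return WORKSTATIONS * per_box

print(grid_cpus(after_hours=False))  # 500 CPUs available during the day
print(grid_cpus(after_hours=True))   # 1,000 CPUs available overnight
```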

Ford has reportedly saved as much as $100 million—basically the difference between what it spent on the workstations and what it would have paid to buy and house a $150-million supercomputer.

Likewise, since May 1999, Saab Automobile AB has been using Sun’s free ONE Grid Engine to create a pool of 100 Sun workstations for external aerodynamics and other CFD simulations. Now Saab uses this grid as a virtual supercomputer—24 hours a day, every day.

Sun is not the only hardware or software supplier offering grid computing technology. However, the Sun ONE Grid Engine reportedly runs over 7,000 grids with an average of 47 CPUs per grid. The base software is free (download from www.Sun.com); the commercial enterprise version begins at $20,000. Sun ONE Grid Engine, natch, runs on Solaris and Linux, the two operating systems Sun sells, while open-source versions are now available for almost all Unix flavors, as well as Mac OS X.
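
For a sense of what using Grid Engine looks like in practice, batch jobs are handed to its queue with the qsub command. The sketch below wraps that call from Python; the script name and job name are hypothetical placeholders, and it assumes Grid Engine is installed with its binaries on the PATH.

```python
# Hypothetical submission of an MCAE batch script to a Grid Engine queue.
# Assumes Grid Engine is installed and 'qsub' is on the PATH.
import subprocess

def submit(script: str, job_name: str) -> None:
    subprocess.run(
        ["qsub",
         "-N", job_name,   # job name shown in the queue
         "-cwd",           # run in the current working directory
         script],
        check=True,        # raise if the submission is rejected
    )

submit("run_cfd.sh", "cfd_overnight")   # placeholder script and job name
```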

Nice, isn’t it? Not only making do with what you have, but coming out better, faster, and cheaper. More power to you.