Traditional Parallel Uses for CUDA
Because parallel programming in CUDA carries the same complications as parallel programming in general, and because the hardware is best suited to massively parallel calculations, the applications we have seen for CUDA have been restricted to the traditional uses of parallel computing.
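The kind of "massively parallel" calculation GPUs favor is one where thousands of independent threads each handle a small piece of the work. As a minimal illustrative sketch (not from the original article, and using unified memory to keep it short), element-wise vector addition shows the pattern:

```cuda
#include <cstdio>

// Data-parallel kernel: each thread computes exactly one output element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // cudaMallocManaged keeps this sketch short; production HPC code
    // typically manages host/device copies explicitly for performance.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same one-thread-per-element structure scales from this toy example up to the number-crunching workloads described below, which is why those fields adopted GPUs first.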
High Performance Computing has harnessed large numbers of commodity CPUs in parallel for years on large number-crunching problems. Medical research and various kinds of modeling (financial, weather, fluid dynamics, etc.) are just a few of the dozens of fields that rely on parallel HPC, and that is where we have seen most of CUDA's applications.
So what's so new and great about CUDA? Well, those parallel-friendly applications are far better suited to GPUs in the first place. Users can get the same results from a handful of GPUs that would otherwise require far more CPUs, at a fraction of the cost. Nvidia's Tesla brings HPC performance in a rack-mountable or even desktop PCI-E card format.
It still costs thousands of dollars, but it lets researchers do work that would normally require hardware costing one or two orders of magnitude more. Letting community colleges offer their students Ivy League computing power without the Ivy League price tag is great, and Nvidia has certainly found itself a lucrative market. We're still not talking about what CUDA can do for average computers, though.