Today's graphics cards offer unparalleled performance and near-photorealistic visuals. Just compare games from a decade ago with Crysis or S.T.A.L.K.E.R.: Clear Sky, and you will find the difference almost unbelievable. How can improvements in video hardware raise the bar even higher?
The video card has been around as long as home PCs, and its traditional purpose has been to render and output images to the monitor. It wasn't until the mid-1990s, when 3D accelerators were integrated into video cards, that they became much more versatile. Today, the video card is one of the most complex components in a PC, and it is taking on more roles as time goes on.
GPU-accelerated physics has been promised by video card makers for years. Because physics calculations are highly parallel, the many simple processors in a graphics card can perform them faster than a CPU can. Little progress was made in this area, however, until Nvidia bought Ageia and its PhysX API, which makes it easier for game developers to integrate realistic physics into their games. After the acquisition, Nvidia reworked PhysX to run on the video card itself, removing the need for a separate physics processor. Because only Nvidia graphics cards can accelerate PhysX in hardware, however, many game developers are unwilling to support it. This has been one of the major stumbling blocks for GPU-accelerated physics, but there is still hope thanks to DirectX 11.
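To see why physics maps so well onto a GPU's many simple processors, consider a basic particle update: every particle's next position depends only on its own state, so a GPU can assign one thread per particle and update them all simultaneously. The sketch below is a hypothetical plain-Python version of such a step (not PhysX's actual implementation); the loop body is the part a GPU would run in parallel.

```python
def step_particles(positions, velocities, dt=0.01, gravity=-9.81):
    """Advance every 2D particle by one time step.

    Each particle's update is independent of all the others, which
    is exactly the property that lets a GPU run one thread per
    particle. This sequential loop is a CPU sketch of that idea.
    """
    new_positions, new_velocities = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vy += gravity * dt                      # gravity changes velocity
        new_positions.append((x + vx * dt, y + vy * dt))
        new_velocities.append((vx, vy))
    return new_positions, new_velocities
```

With thousands of particles, each iteration of this loop becomes one GPU thread, and the whole step completes in roughly the time of a single update.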
Microsoft's DirectX 11 will introduce the "compute shader," which essentially turns the graphics card's shader hardware into an array of general-purpose processors. These processors can be used to perform many kinds of calculations, including physics. Today the video hardware is used almost exclusively for 3D rendering, but with compute shaders a much wider range of tasks can be performed on the GPU.
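The compute shader's programming model is simple: you write a small kernel function and "dispatch" it across thousands of threads, each identified by a thread ID. Real compute shaders are written in HLSL, but the toy Python sketch below (a hypothetical stand-in, not Direct3D's API) shows the shape of that model; on a GPU, the kernel invocations would run in parallel rather than in a loop.

```python
def dispatch(kernel, thread_count, *buffers):
    """Toy model of a compute-shader dispatch: invoke the kernel
    once per thread ID. A GPU runs these invocations in parallel;
    this sketch runs them one after another."""
    for tid in range(thread_count):
        kernel(tid, *buffers)

def square_kernel(tid, src, dst):
    # Each "thread" handles exactly one element. Because no thread
    # touches another thread's element, all of them can run at once.
    dst[tid] = src[tid] * src[tid]

src = [1, 2, 3, 4]
dst = [0] * len(src)
dispatch(square_kernel, len(src), src, dst)
# dst is now [1, 4, 9, 16]
```

Physics, video encoding, and scientific number-crunching all fit this pattern: break the work into many independent per-element tasks and let the GPU's processors chew through them together.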
DirectX 11's compute shader (and OpenCL, a similar API) bring a whole new world of functionality to the GPU. Not only can they be used to improve physics and AI in video games, they can handle anything from scientific calculations to encoding high-definition video. Many research facilities have begun using clusters of video cards in place of traditional supercomputers, because GPUs are cheaper and far more powerful in highly parallel applications. New GPUs are being optimized for these GPGPU (general-purpose GPU) tasks, and more and more functions that were once exclusive to the CPU can now be performed on video cards.
We may see a future where the GPU does most of the work in a PC, with the CPU serving only to tell the GPU what to calculate. Intel's upcoming video card, Larrabee, is essentially a cluster of simple CPU cores on a PCI Express card, and AMD is working on CPU-GPU hybrids of its own, so the line between the two processors is becoming more blurred as time goes on.