Intel and AMD have taken us to a market of multi-core processors. But getting software to actually use all those cores requires the development of more parallel programs. What is easy to overlook is that there is already a highly parallel processor, running thousands of threads at a time, in normal desktops and laptops: the GPU.
Long before CUDA and GPGPU became talking points, and before normal users gave much thought to whether their software was parallel enough to use all of the CPU, graphics processing was thought of in terms of a pipeline. Performance was increased by widening the pipeline, or more accurately, by adding more identical, parallel pipelines.
Some clever programmers and engineers saw all these commercially available, incredibly parallel chips and realized that, where a problem lent itself to parallelization in a way similar to computer graphics, there should be a way to trick a GPU into doing the work a CPU usually does… and doing it faster or for less money.
And GPGPU computing was born. Well, maybe it’s more appropriate to say that the aforementioned clever people had a GPGPU bun in the oven or a twinkle in the eye. There was a long way to go before you and I would start getting our hot little hands on it. But before we get into how GPGPU is finally reaching the mainstream, many of you are desperate to know: