Graphics Processor Evolution: Pipeline to Unified Shader Architecture

Edited by: J. F. Amprimoz • Updated: 6/6/2009

We’re hearing a lot about GPGPU computing, and Intel is entering the graphics fray with a many-core design, called Larrabee, that uses modified CPUs. Here we look at the move from a fixed graphics pipeline to one with programmable, then unified, shaders.

  • slide 1 of 6

Until recently, the process of generating computer graphics was referred to as the graphics pipeline. But that just wasn’t cutting it for sophisticated effects like water and smoke. Over time, the process has been taken over by more flexible shaders, and now uses unified shaders able to perform quite a few different tasks. We explain this change.

  • slide 2 of 6

    So What’s Wrong with the Graphics Pipeline?

    Well, it’s a pipeline. You can’t send something back for a change if a later stage reveals the need to. A good example of this is how games through the ‘90s had a lot of trouble representing transparency. Things you were supposed to be able to see through, like shallow water or smoke, tended to look solid, or flicker in and out.

Once rasterization decides something is behind something else, the thing in back is pretty much gone. You can’t get to the texturing stage and say, “Hold on, this thing is smoke, you should be able to see stuff through it.”

So a person standing in shallow water would have their legs ‘behind’ the water, and you wouldn’t see them at all. A plane flying through a cloud would look like the cloud had a hard edge and the plane was coming out of a cloud-shaped gate from another dimension.

The trick was to alternate frames: some showing the opaque water or smoke, and some showing what should be visible through it. If the timing didn’t come off just right, it created a flickering effect. A more sophisticated trick was to draw smoke as a pattern of dots, so the background could still show through depending on how dense the dots were.
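Modern programmable shaders sidestep these tricks with alpha blending: each surface carries an opacity value, and the pixel shader mixes the surface’s color with whatever is behind it. A minimal sketch of the blend math in Python (the function name and sample colors are ours, for illustration):

```python
def alpha_blend(fg, bg, alpha):
    """Blend a foreground color over a background color.

    fg, bg: (r, g, b) tuples with components in 0.0-1.0.
    alpha:  1.0 means fully opaque, 0.0 means fully transparent.
    """
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))

# Light-gray smoke at 30% opacity over a blue sky: the sky still
# shows through, with no flicker and no hard edge.
sky = (0.2, 0.4, 0.9)
smoke = (0.8, 0.8, 0.8)
print(alpha_blend(smoke, sky, 0.3))
```

Because the blend happens per pixel, per frame, shallow water can tint the legs beneath it instead of erasing them.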

  • slide 3 of 6

    What Else is Wrong with a Pipeline?

A pipeline works like an assembly line. Like an assembly line, it only works efficiently if you have enough of the right machinery and enough people who can run it at each step of the process. Otherwise, you end up with some production capacity sitting around idle.

    Similarly, if the graphics pipeline’s hardware isn’t matched perfectly to the processing needs of the task, some of it sits idle. And since the images that need to be displayed are wildly different, the match is never perfect.
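The idle-hardware argument can be made concrete with a little arithmetic. The sketch below (our own illustration, with hypothetical unit counts and workloads) computes what fraction of a chip’s units stay busy when each unit type can only do its own kind of work, so the frame takes as long as the most overloaded stage:

```python
def utilization(units, work):
    """Fraction of hardware kept busy during one frame.

    units: units of each type on the chip, e.g. {"vertex": 4, "pixel": 12}.
    work:  amount of work of each type in the frame, in unit-time.
    """
    # Time each stage needs, given its share of the hardware.
    times = {k: work[k] / units[k] for k in units}
    # The frame is only done when the slowest stage finishes.
    frame_time = max(times.values())
    busy = sum(units[k] * times[k] for k in units)
    total = sum(units.values()) * frame_time
    return busy / total

# A chip with a fixed 4/12 split of vertex/pixel units, rendering
# a pixel-heavy scene and then a vertex-heavy one:
fixed = {"vertex": 4, "pixel": 12}
print(utilization(fixed, {"vertex": 4, "pixel": 48}))   # pixel-bound frame
print(utilization(fixed, {"vertex": 40, "pixel": 12}))  # vertex-bound frame
```

On the vertex-bound frame, the twelve pixel units finish early and then sit idle while the four vertex units grind on, so overall utilization drops sharply. No fixed split is right for both frames.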

  • slide 4 of 6

    The Solution: Programmable Shaders

    This is a transition that, like most, did not happen overnight. In order to provide more sophisticated graphics to users, manufacturers started making the fixed function hardware at each stage of the pipeline more flexible. Some of them became known as shaders, and they eventually became flexible enough to overcome most of the difficulties caused by a linear pipeline.

Unfortunately, the shaders fell into three categories, roughly corresponding to steps of the graphics pipeline. Vertex shaders would position the 3D model’s vertices and compute their lighting. Geometry shaders would turn the lines into surfaces. Pixel shaders would apply the textures and other per-pixel effects. One shader could only do one type of task.

    That meant the other problem with a pipeline, having part of it doing nothing most of the time, was still there. Two further steps get us to the current state of affairs and fix that problem.

  • slide 5 of 6

    Universal Shader Architecture

Most graphics hardware currently uses DirectX to communicate with the applications being run. It is an Application Programming Interface, or API, that programmers use to get their software to use hardware effectively. Microsoft has tweaked it over time, and DirectX 10 introduced a unified shader instruction set.

That means that software for the different kinds of shaders could be written in a more similar manner, making the programmer’s job easier. In an uncommon case of hardware and software changing at the same time, each to benefit from changes in the other, ATI and Nvidia both started making GPUs with unified shaders.

Since the three kinds of shader had to understand the same instructions anyway, they are now built so that they are no longer confined to a single task. There are no more vertex, geometry, and pixel shaders: just shaders. A unified shader can do any of the three kinds of work, so it can take whatever needs doing instead of waiting for work of its own kind to come in.
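The payoff is scheduling flexibility. The toy comparison below (our own sketch; the unit counts and job costs are made up) pits a fixed vertex/pixel split against a unified pool of the same total size on a vertex-heavy frame, handing each job to the least-loaded eligible unit:

```python
def frame_time_fixed(split, jobs):
    """Fixed-function chip: each job kind has its own pool of units.

    split: units per kind, e.g. {"vertex": 4, "pixel": 12}.
    jobs:  list of (kind, cost) work items.
    """
    pools = {kind: [0.0] * n for kind, n in split.items()}
    for kind, cost in jobs:
        pool = pools[kind]
        pool[pool.index(min(pool))] += cost  # least-loaded unit of that kind
    return max(max(pool) for pool in pools.values())

def frame_time_unified(n_units, jobs):
    """Unified chip: any of the n_units can run any job."""
    units = [0.0] * n_units
    for _, cost in jobs:
        units[units.index(min(units))] += cost  # least-loaded unit overall
    return max(units)

# A vertex-heavy frame: 40 vertex jobs, 12 pixel jobs, equal cost.
jobs = [("vertex", 1.0)] * 40 + [("pixel", 1.0)] * 12
print(frame_time_fixed({"vertex": 4, "pixel": 12}, jobs))  # vertex pool bottlenecks
print(frame_time_unified(16, jobs))                        # same 16 units, shared
```

With the fixed split, the four vertex units take 10 time units while the pixel units idle after 1; the unified pool of the same sixteen units finishes the whole frame in 4.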

  • slide 6 of 6

    What’s Next?

Intel hopes it is Larrabee, which, instead of being a GPU made up of lots of SIMD units, will be a GPU made up of a bunch of CPUs, each modified to run more SIMD threads and to understand graphics-related instructions.

There is a lot of hype around Larrabee and the idea of using lots of simple CPUs in parallel as a GPU. But the fact of the matter is that unified shaders have already made GPU SIMD units more flexible, and the CPUs on Larrabee are so SIMD-oriented that the difference may not actually be that big.