How Do Graphics Cards Work? What Is a Vector and What Are Wireframes?

How Computer Graphics Processors Work

Before we get into the changes in graphics processing that allow for GPGPU (General-Purpose computing on Graphics Processing Units), and that have given Intel the notion to use many heavily modified CPU cores in parallel as a graphics processor, let's look at some general points in computer graphics and how hardware has dealt with them in the past.

One concept that will appear throughout the rest of this discussion is using vectors to construct wireframes. The CPU hands graphics assets off to the GPU in the form of huge matrices full of vectors. Remembering some high-school algebra, we know a vector has a direction and a length, which is enough for us to draw a line. By drawing (in memory on the graphics card; we are nowhere near drawing on the screen yet) all of the vectors in the correct positions, the graphics card creates a 3D model of the scene and the objects in it. This is called a wireframe, since it is made up of a bunch of lines and looks almost as if someone had built a model with hundreds of thousands of straight bits of wire.

It takes so many lines because representing a curved line takes many short straight lines, and a curved surface requires even more flat surfaces to approximate. A cube can be built from just 12 edges; representing a sphere with straight lines means using so many little lines that your eyes see a curved surface.
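To make that concrete, here is a minimal C++ sketch (the Vec3 and Edge types are our own illustration, not any real graphics API) of a cube as a wireframe: 8 corner vertices and the 12 edges that connect them, each edge stored as a pair of indices into the vertex list.

```cpp
#include <cstdio>

// A minimal sketch, not any real engine's format: a wireframe is just a
// list of vertices plus pairs of vertex indices that form the edges.
struct Vec3 { float x, y, z; };
struct Edge { int a, b; };          // indices into the vertex list

int main() {
    // The 8 corners of a unit cube.
    Vec3 vertices[8] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // bottom face
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // top face
    };
    // The 12 edges of the cube.
    Edge edges[12] = {
        {0,1},{1,2},{2,3},{3,0},   // bottom
        {4,5},{5,6},{6,7},{7,4},   // top
        {0,4},{1,5},{2,6},{3,7}    // verticals
    };
    for (const Edge& e : edges) {
        const Vec3& p = vertices[e.a];
        const Vec3& q = vertices[e.b];
        std::printf("edge (%g,%g,%g) -> (%g,%g,%g)\n",
                    p.x, p.y, p.z, q.x, q.y, q.z);
    }
    return 0;
}
```

A sphere would be represented the same way, just with far more vertices and edges; the structure does not change, only the count.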

Why Not Just Draw a Curve?

The wires are straight because computers can work with straight lines far more easily than with curves, which makes sense: the math for drawing a curve by hand is likewise more complicated than what it takes for a straight line. A straight line, which can be represented as a vector, can also be represented with a simple linear equation, like y = 2x for 2 < x < 5.

A curve requires you to square something or take a root, as in y = x^2 for 2 < x < 5, and you have to do it at so many points along the line that you are essentially calculating a whole bunch of straight lines anyway. And the work our brains do to smooth out the curve as we draw from point to point involves tremendous complexity if you try to express it mathematically (curvilinear approximation is university math, not high school math).

Of course, a straight line only needs two points. A computer can churn out enough straight lines to make your eye see a curve much faster than it can actually churn out the curve.
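Here is a quick C++ sketch of that trade-off: to draw y = x^2 between x = 2 and x = 5, the computer evaluates the curve only at a handful of endpoints and joins them with straight segments. The segment count N here is an arbitrary choice for illustration; a real renderer would pick it based on how large the curve appears on screen.

```cpp
#include <cstdio>

// Approximate the curve y = x^2 on 2 < x < 5 with N straight segments.
// Each segment needs only its two endpoints; the more segments we use,
// the smoother the "curve" looks to the eye.
int main() {
    const int    N  = 10;             // number of straight segments (arbitrary)
    const double x0 = 2.0, x1 = 5.0;  // the interval from the example above
    double px = x0, py = x0 * x0;     // previous endpoint
    for (int i = 1; i <= N; ++i) {
        double x = x0 + (x1 - x0) * i / N;
        double y = x * x;             // the curved math runs once per endpoint
        std::printf("segment (%.2f, %.2f) -> (%.2f, %.2f)\n", px, py, x, y);
        px = x; py = y;
    }
    return 0;
}
```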

Vectors and SIMD Go Hand In Hand

Now we get back to the realm of computing. A graphics processor uses dozens of SIMD (Single Instruction, Multiple Data) processors. The simplest processors are SISD: Single Instruction, Single Data. An SISD processor takes one thing out of memory, does something to it, and puts it back.

SIMD, in contrast, can take a list of things out of memory, do something to all of them, and put them back. If you think about vectors, that's perfect. Want to make something bigger as it gets closer to you? A vector is stored as a list of values at consecutive memory addresses, so you load the whole list and multiply it by some number bigger than one. An SISD processor would have to fetch each component of the vector from its memory address, multiply it, and put it back. Applying a scalar multiple to a 3D vertex takes SISD nine steps (load, multiply, store for each of the three components), while SIMD can do it in three.
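Here is what that difference looks like in C++ using x86 SSE intrinsics, a CPU-side stand-in for the GPU's SIMD hardware (GPU shader code is not written this way directly, but the principle is the same). The commented-out loop is the nine-step SISD version; the three intrinsics below it are the three-step SIMD version.

```cpp
#include <immintrin.h>  // SSE intrinsics (x86); any SIMD ISA works the same way
#include <cstdio>

int main() {
    // A 3D vertex, padded to 4 floats so it fills one 128-bit SSE register.
    alignas(16) float vertex[4] = {1.0f, 2.0f, 3.0f, 0.0f};
    const float scale = 2.0f;

    // SISD style: load, multiply, store -- once per component (9 steps for 3):
    // for (int i = 0; i < 3; ++i) vertex[i] *= scale;

    // SIMD style: one load, one multiply, one store for all components.
    __m128 v = _mm_load_ps(vertex);          // step 1: load every component
    v = _mm_mul_ps(v, _mm_set1_ps(scale));   // step 2: multiply them all at once
    _mm_store_ps(vertex, v);                 // step 3: store every component

    std::printf("scaled vertex: (%g, %g, %g)\n",
                vertex[0], vertex[1], vertex[2]);
    return 0;
}
```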

On the surface that looks like a nice, three-fold reduction, but it’s more significant than that. As the 3D model works its way towards the final picture onscreen, you eventually get to a point where each pixel has a massive vector tied to it.

The pixel needs to know where it goes (its x, y coordinates), what colour it is, how well lit it is, whether it is behind something transparent, how transparent that thing is, what colour the transparent thing is, whether it is antialiased and, if so, with what colour, and so on. The benefits of SIMD obviously grow with how long these vectors get.
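For a sense of scale, here is an illustrative sketch of the kind of per-pixel record that builds up; the field list is our own invention, not a real GPU's layout, but every field is one more element that SIMD hardware can process in the same instruction.

```cpp
#include <cstdio>

// An illustrative sketch only; real GPUs pack per-pixel data differently.
struct PixelAttributes {
    float x, y;             // screen position
    float r, g, b;          // base colour
    float light;            // how well lit the pixel is
    float alpha;            // how transparent the thing in front of it is
    float tr, tg, tb;       // colour of that transparent thing
    float coverage;         // antialiasing coverage
    float aa_r, aa_g, aa_b; // antialiasing blend colour
    // ...and so on. The whole record is effectively one long vector, so a
    // SIMD unit can apply the same operation to many fields in one go.
};

int main() {
    std::printf("per-pixel vector: %zu floats\n",
                sizeof(PixelAttributes) / sizeof(float));
    return 0;
}
```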

Why Not Multiple Instruction, Multiple Data?

If SIMD is better than SISD, wouldn't something that can do Multiple Instructions on Multiple Data be best? MIMD is indeed better in some cases, in fact most. Parallel SIMD setups, similar to a modern GPU touting GPGPU benefits, started off in supercomputers in the '80s. They were largely replaced by MIMD, and a modern PC CPU is a combination of MIMD and SIMD processors. Current GPGPU efforts are largely a new take on an old idea, approached from the other direction.

The reason that SIMD and MIMD are used together on CPUs, while GPUs rely almost exclusively on SIMD, is that MIMD processors are more complex, as one would guess. That means that for a given processor die size, you can fit more SIMD units than MIMD units, and for less money. A CPU combines them to offer the flexibility of MIMD while controlling cost and size by using SIMD where appropriate.

We have spent most of this article explaining how important vectors are to computer graphics and how nicely they work with SIMD, so it should come as no surprise that SIMD units are found in droves on GPUs. Because almost everything a GPU does involves vectors, while a CPU has all kinds of calculations to handle, you get far better graphics performance for a given cost, wattage, and die size by loading the chip up with SIMD units than with larger, more sophisticated MIMD units.

Next: The Graphics Pipeline Becomes More Like a Dry Dock

We now have a good idea of how a computer takes numbers and constructs a 3D wireframe model of a scene, how this depends heavily on vectors, and how SIMD processors are great for vector, and thus graphics, processing. We still haven't figured out how this 3D model becomes a pretty image on your 2D screen, though.

The process, starting with constructing the wire model and ending with outputting a frame's worth of pixels at a time, used to be referred to as a graphics pipeline. Actually, many people still call it that, but current graphics cards don't really do things in a linear, start-to-finish way anymore. We discuss this in The Graphics Pipeline: How a Computer Makes an Image.