What is Grid Computing and How Does It Work?

Introduction

Most people are familiar with the concept of a power grid, in which various sources of electricity are linked together to supply power to a geographical area. Grid computing works on a similar principle: computers are linked together in a grid to provide a greater pool of computational resources.

Grid computing is an arrangement of networked computers in which the unused processing power of all the machines is harnessed to complete tasks more efficiently. Tasks are distributed amongst the machines, and the results are collected to produce a final outcome. The advantage of grid computing is that it reduces the time taken to complete tasks without a corresponding increase in cost.

Computers on a grid are not necessarily in the same geographical location, and can be spread out over multiple countries and organizations, or even belong to individuals.

What is Grid Computing?

Computers today have great processing power, even on the lowliest of machines. During an average working day, most of this computational potential goes unused. The standard tasks on an average user's computer vary little: word processing, Internet browsing, spreadsheets and presentations. These tasks use a small percentage of the processor's potential; the rest sits idle, wasting a resource that could otherwise be harnessed.

In a grid computing environment, computers are linked together so that a task running on one machine can draw on the unused processing power of other machines and complete faster. This arrangement minimizes wasted resources and increases efficiency considerably, since a task split across multiple machines takes significantly less time to compute.

Serial Computing vs. Parallel Computing

Each processor executes tasks from a queue. Many scheduling algorithms exist, but in essence there is a single task queue. A single processor core can handle only one task at a time, and as a result software has traditionally been written to execute tasks sequentially. For example, if task X needs to be executed before task Y, the programmer ensures that this order is maintained in the program. This is known as serial computing.
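
As a minimal sketch, assume two hypothetical tasks, task_x and task_y, that each take about one second (the names and timings are purely illustrative). Run serially in Python, one task must finish before the other begins:

```python
import time

def task_x():
    # Hypothetical task: simulate one second of work
    time.sleep(1)
    return "result of X"

def task_y():
    # Hypothetical task: simulate one second of work
    time.sleep(1)
    return "result of Y"

# Serial computing: task X completes before task Y begins,
# so the total runtime is roughly the sum of both tasks.
start = time.time()
results = [task_x(), task_y()]
print(results, f"took {time.time() - start:.1f}s")  # roughly 2s
```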

Even though sequence plays an important role in computing, certain tasks are independent of one another and can therefore be performed simultaneously. If two tasks can be performed independently of each other, they can be assigned to two different machines. Each machine performs the task it was assigned, producing results substantially faster than if one machine performed both tasks one after the other. This is known as parallel computing, and it is vital to the successful implementation of a grid.
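
If the same two hypothetical tasks are independent, they can run at the same time. The sketch below uses a local process pool as a stand-in for two separate machines; it illustrates parallel execution on one computer rather than a real grid, and the function names are again assumptions:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def task_x():
    time.sleep(1)          # hypothetical independent task
    return "result of X"

def task_y():
    time.sleep(1)          # hypothetical independent task
    return "result of Y"

if __name__ == "__main__":
    # Parallel computing: the two independent tasks run at the same
    # time on separate processes (stand-ins for separate machines),
    # so the total runtime is roughly that of the slower task,
    # not the sum of both.
    start = time.time()
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(task_x), pool.submit(task_y)]
        results = [f.result() for f in futures]
    print(results, f"took {time.time() - start:.1f}s")  # roughly 1s
```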

How Grid Computing Works

If a machine on a computing grid has a large task to perform, the program must first be parallelized. The flow of the program is analyzed and broken into modules, which are then arranged to show which ones can execute independently. Those modules are sent to different machines for execution, and the results are sent back to the originating machine, where they are combined into a final result.
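
The sketch below illustrates that split, distribute and combine pattern. It again uses a local process pool as a stand-in for remote grid machines; real grid middleware would handle scheduling and data transfer, and the function names (process_chunk, run_on_grid) are hypothetical:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Hypothetical independent module: each "machine" sums its
    # own slice of the data.
    return sum(chunk)

def run_on_grid(data, workers=4):
    # 1. Parallelize: split the large task into independent chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    # 2. Distribute: send each chunk to a different worker
    #    (processes here stand in for remote grid machines).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_results = list(pool.map(process_chunk, chunks))

    # 3. Combine: gather the partial results on the originating
    #    machine and merge them into one final answer.
    return sum(partial_results)

if __name__ == "__main__":
    print(run_on_grid(list(range(1_000_000))))  # 499999500000
```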