Cloud Computing vs. Grid Computing - What is the Difference?
Cloud computing and grid computing are two relatively new concepts in the field of computing. They are often mistaken for the same thing, but that is not the case at all.
Both grid and cloud computing are networks which abstract processing tasks. Abstraction masks the complex processes taking place within a system and presents the user with a simplified interface that is easy to interact with. The idea is to make the system more user-friendly while retaining all the benefits of the more complicated processes underneath.
Although there is a difference in the fundamental concepts of grid and cloud computing, that does not mean they are mutually exclusive; it is quite feasible to have a cloud within a computational grid, just as it is possible to have a computational grid as part of a cloud. They can even be the same network, merely represented in two different ways.
Advantages of Distributed Computing
Distributed computing, as one can imagine, is where the computing elements of a network are spread over a large geographical area. Both cloud and grid computing are prime examples of distributed computing architectures.
The main advantage of this sort of environment is the ability to tap into multiple areas of expertise, using a single resource. For example, in a cloud computing environment, there are often multiple servers which can each perform a single task excellently. Using the cloud gives a user access to all these servers through one interface.
Computing elements come with their own set of requirements, such as appropriate storage, physical security and regular maintenance. In a distributed computing environment, since the elements are spread out, these costs are distributed accordingly as well.
There are many architectures for distributed computing environments; the focus of this article is on cloud computing and grid computing.
What is Cloud Computing?
Cloud computing is an extension of the object-oriented programming concept of abstraction. Abstraction, as explained earlier, removes the complex working details from visibility. All that is visible is an interface, which receives inputs and provides outputs. How those outputs are computed is completely hidden.
For example, a car driver knows that turning the steering wheel will turn the car in the direction they want to go, and that pressing the accelerator will cause the car to speed up. The driver is usually unconcerned with how the movements of the steering wheel and the accelerator pedal are translated into the actual motion of the car. These details are therefore abstracted from the driver.
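The same idea can be sketched in code. The class below (the names are illustrative, not from any real system) exposes only a simple interface, while the internal state and mechanics stay hidden from the caller:

```python
class Car:
    """A simplified interface: the driver sees only steer() and accelerate()."""

    def __init__(self):
        self._speed = 0        # internal state, hidden behind the interface
        self._heading_deg = 0

    def steer(self, degrees):
        # Internally this would involve steering geometry and linkages;
        # the caller only sees the resulting change of heading.
        self._heading_deg = (self._heading_deg + degrees) % 360

    def accelerate(self, delta):
        # Fuel injection, gearing and so on are abstracted away.
        self._speed += delta

    @property
    def speed(self):
        return self._speed


car = Car()
car.steer(90)
car.accelerate(30)
print(car.speed)  # 30
```

The caller never touches `_speed` or `_heading_deg` directly; the interface is the whole contract, which is exactly what a cloud presents to its users.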
A cloud is similar; it applies the concept of abstraction in a physical computing environment, by hiding the true processes from a user. In a cloud computing environment, data can exist on multiple servers, details of network connections are hidden and the user is none the wiser. In fact, cloud computing is so named because a cloud is often used to depict inexact knowledge of inner workings.
Cloud computing derives heavily from the Unix philosophy of having multiple elements, each excellent at one particular task, rather than one massive element that does everything less well.
What is Grid Computing?
Grid computing harnesses the idle processing power of various computing units and uses it to compute a single job. The job is controlled by one main computer and broken down into multiple tasks which can be executed simultaneously on different machines. These tasks need not be fully independent of one another, although that is the ideal scenario. As the tasks complete on the various computing units, the results are sent back to the controlling unit, which collates them into a cohesive output.
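The split/compute/collate pattern just described can be sketched with Python's standard `multiprocessing` module, with worker processes standing in for the separate machines of a real grid (the job here, summing squares, is just a placeholder):

```python
from multiprocessing import Pool


def square(n):
    # Each worker computes one independent task.
    return n * n


def run_job(data):
    # The controlling process splits the job into tasks, farms them out
    # to the workers, and collates the partial results into one output.
    with Pool(processes=4) as pool:
        results = pool.map(square, data)  # input order is preserved
    return sum(results)


if __name__ == "__main__":
    print(run_job(range(10)))  # 285
```

In a real grid the tasks would travel over the network to other machines, but the control flow — one coordinator distributing tasks and collating results — is the same.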
The advantage of grid computing is two-fold: firstly, unused processing power is effectively used, maximizing available resources and, secondly, the time taken to complete the large job is significantly reduced.
For a job to be suited to grid computing, the code needs to be parallelized. Ideally, the source code should be restructured into tasks that are as independent as possible. That is not to say they cannot be interdependent, but messages passed between tasks add to the overall run time. An important consideration when creating a grid computing job is correctness: whether the code is executed serially or as parallel tasks, it must produce the same outcome under every circumstance.
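That requirement can be checked directly: run the same set of tasks serially and in parallel and confirm the outputs match. A minimal sketch, using `concurrent.futures` threads in place of grid nodes (the chunked sum-of-squares job is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor


def task(chunk):
    # An independent sub-task: it shares no state with the other chunks.
    return sum(x * x for x in chunk)


def serial(chunks):
    # Reference result: execute every task one after another.
    return sum(task(c) for c in chunks)


def parallel(chunks):
    # Same tasks, executed concurrently; map() preserves input order.
    with ThreadPoolExecutor(max_workers=4) as ex:
        return sum(ex.map(task, chunks))


chunks = [range(0, 25), range(25, 50), range(50, 75), range(75, 100)]
assert serial(chunks) == parallel(chunks)  # same outcome either way
```

Because the tasks share no state and the collation step (summation) is order-insensitive, the parallel run is guaranteed to match the serial one, which is exactly the property a grid job must have.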
Cloud Computing vs. Grid Computing
The difference between grid computing and cloud computing can be hard to grasp because they are not always mutually exclusive. Both are used to economize computing by maximizing existing resources. Additionally, both architectures use abstraction extensively, and both consist of distinct elements which interact with each other.
However, the difference between the two lies in the way the tasks are computed in each respective environment. In a computational grid, one large job is divided into many small portions and executed on multiple machines. This characteristic is fundamental to a grid; not so in a cloud.
The computing cloud is intended to let the user make use of various services without investing in the underlying architecture. While grid computing offers a similar facility for computing power, cloud computing is not restricted to just that. A cloud can offer many different services, from web hosting right down to word processing. In fact, a computing cloud can combine services to present the user with a single, optimized result.
There are many computing architectures that are often mistaken for one another because of certain shared characteristics. Again, these architectures are not mutually exclusive, but they are conceptually distinct.