There are multitudes of computers around us, some of which we use on a daily basis and others we only hear about. In very general terms, computers are either large or small, for shared use or for personal use. Here is a quick look at the terminology used for the hardware of the larger - and largest - computers.
Computers can be split into two general kinds, but the average user will only come into contact with one of them: the small, or personal, computer. This article looks at the "non-personal" computers - the types of big computers. At the low end of this group is the workstation, which is generally used by one person at their job. Beyond workstations, there are minicomputers, dedicated server machines, mainframes, supercomputers, and a few types in between.
A workstation can be a self-contained computer, with hardware modifications suited to the work it is used for, meant for use by a specialist of some type. Alternatively, it can be a station where a worker has only a keyboard and display, connected to a larger computer that does all the processing the user requests. In that case, the larger computer would probably be running a virtual machine specifically for that workstation. Workstations can also be networked to one another.
Workstations are often used in business and research environments and, at least in the past, tended to be more powerful than the average personal computer. With the introduction of quad-core systems for consumers, this distinction has blurred considerably: some home computers, built from newer and more capable components, are now more powerful than older business workstations.
More powerful than the workstation is a computer sometimes known as a minicomputer. It is now also called a midrange computer, because it occupies the space between the personal computer and the mainframe - in size as well as in power. Minicomputers were once the size of a refrigerator, or a couple of refrigerators, but are now often smaller than a washing machine. They are multiuser computers, typically found in a medium-sized business or a division of a larger one. While some are still made by Sun, HP and IBM, they are a branch of the computer development tree that will not develop further.
However, many things we now take for granted were originally developed for minicomputers, including multitasking, networking and the operating systems that evolved into the versions of Unix, Linux and Windows used today. Hardware features once found only on minicomputers are now found on home PCs. Early minicomputers - in fact all early computers, no matter how large - had less memory and storage than even the very cheapest budget home computer has now. They were lumbering, small-brained dinosaurs, but they are the ancestors of what we use today. Evolution in the computer industry is a speedy microcosm of what it was on the Earth.
What is now referred to as a server is really two different types of computer filling the same role. Servers literally serve other computers - everything from the server created when you configure your laptop as a mobile hot spot to machines specially built to run programs across a network. Redundant hardware lets them keep running constantly, even while parts are swapped out or components changed. There are literally farms of servers dedicated to running Google, making sure that wherever in the world someone submits a query, they get their results, often in under a second. Problems in one location or machine do not affect the overall functionality of the service provided. Servers can be single dedicated computers or networks of smaller units, including networked PCs, and they may not even have display units.
Servers store information and provide it when requested. The Internet is built of web pages running on millions of servers, most of them dedicated to that use. A single server can host multiple websites. Bright Hub exists on a server, and all of the information that makes up Bright Hub is duplicated on another server in a geographically different location, in case the ice storms that frequent upstate New York in the winter months take out the electrical lines leading to its local server. In reality, it would take more than downed power lines to send all of Bright Hub's traffic to another server location, because servers generally have numerous backup systems in place to assure continuous service, including generators.
Grid computing is an application of cluster computing. David Bader, in his paper "Cluster Computing: Applications" (The International Journal of High Performance Computing, 15(2):181-185, May 2001, available as a PDF download), says:
“The performance of an individual processor used in a high-end personal workstation rivals that of a processor in a high-end supercomputer, such as an SGI Origin, and the performance of the commodity processors is improving rapidly.”
Writing in 2001, Bader was talking about the state-of-the-art computers of the time, used in clusters. The strides made in personal computers since then mean that the clustering he discusses, applied to the computers of today, makes possible a degree of computing power he could not have imagined.
Grid computing is the use of many personal computers or workstations to solve a problem in a distributed fashion. Each individual computer, or node, on the grid receives a packet of information to process when the computer is not otherwise active. It does not matter how advanced or high-end the particular computer is, as long as it can process the work it is given. No node is directly connected to any other, so no synchronization between nodes is needed. All results are sent back to a central computer, which sends out a new task whenever a node completes one.
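The pattern described above - a central computer handing independent packets of work to nodes that never talk to each other - can be sketched in a few lines of Python. This is only an illustration of the idea, not the API of any real grid framework; the names (Coordinator, fetch_task, submit_result) are invented for this example, and threads stand in for the geographically scattered machines.

```python
import queue
import threading

class Coordinator:
    """The central computer: queues tasks and collects results."""
    def __init__(self, tasks):
        self.pending = queue.Queue()
        for t in tasks:
            self.pending.put(t)
        self.results = []
        self.lock = threading.Lock()

    def fetch_task(self):
        # A node asks for its next packet of work; None means no work left.
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def submit_result(self, result):
        # Results come back to the center; nodes never see each other.
        with self.lock:
            self.results.append(result)

def node(coordinator, process):
    """One grid node: repeatedly fetch a packet, process it, send it back."""
    while (task := coordinator.fetch_task()) is not None:
        coordinator.submit_result(process(task))

# Simulate three nodes summing independent chunks of the numbers 0..99.
chunks = [list(range(i, i + 10)) for i in range(0, 100, 10)]
coord = Coordinator(chunks)
workers = [threading.Thread(target=node, args=(coord, sum)) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sum(coord.results))  # the combined answer: 4950
```

Because each chunk is self-contained, it makes no difference which node processes it or in what order the results arrive - which is exactly why a slow home PC and a fast one can share the same grid.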
Several grid computing projects have accumulated processing power rivaling computers on the TOP500 list of supercomputers. It is estimated that nearly three million home computers around the world are working on the SETI@home project. While this does not come near the speed of a top-level supercomputer or mainframe, it is the astonishing number of tasks these networks can handle that makes them remarkable. Grid computing is also a way for personal computer users to participate in some of the most advanced scientific work on the planet, simply by allowing access to their computers when they are not using them. Other large grid computing projects are working on Alzheimer's disease and on data from the Large Hadron Collider.
A mainframe is what people usually think of when they picture the computers that predated the era of personal computing. Mainframes share certain characteristics with servers, such as storing data redundantly and continuing to run while parts are being swapped. Some mainframes have run without failure for more than ten years. Mainframes have enormous power and can run a great many operations at once - such as all the financial transactions happening at every Bank of America location. In fact, it is primarily governments and corporations needing to answer high volumes of requests that use mainframes now.
Mainframes started out as room-sized machines and are now as small as, or smaller than, refrigerators, but they have specific requirements to keep them in good working order: a dust-free, temperature-controlled environment. They do not, however, have the high electrical consumption that characterized old mainframes, and may use much less energy than the “server farms.” A few new mainframes are still being produced, and many still use magnetic tape for backups.
The last big computer we will look at is the supercomputer. Supercomputers are usually built to do one thing, and do it extremely fast. Most are purpose-built as needed, and there may often be only one of a given design in the world. Supercomputers work on such things as DNA sequencing and cryptography, and they will also be involved in the new Square Kilometre Array radio telescope, once it is built.
Their speed is measured in FLOPS, or floating point operations per second, and the newest ones are mind-bogglingly fast, measured in petaflops. One petaflop is a 1 followed by 15 zeros - a thousand trillion - operations per second. A very few supercomputers are available for general science, but most are dedicated to a single purpose.
In the next article, we will look at all the different types of computers lumped into the category of personal computer.
Source material and corrections for this article came from Eofn Williams, Henry Scudder, Michael Scudder, Lamar Stonecypher, Michele McDonough, James Allen Johnson, Wikipedia, SETI@home, David Bader's paper archived by Georgia Tech, and the TOP500 site, which lists the top 10 supercomputers with links to additional material on each of them. A wonderful pictorial history of computers from the early 20th century through the 80s can be found at the Computer History Museum.
This post is part of the series: Computers and Hardware and Terminology
In this series we look at the basics of computer hardware, starting with the two main types of computers and then moving on to definitions, examples and lists of what comprises computer hardware. After reading this series, you will understand mainframes and personal computers, CPUs, RAM and ports.