SCSI and SAS
SCSI (pronounced “scuzzy”) stands for Small Computer System Interface, but as with most acronyms, knowing the words that make it up doesn’t give you much insight into what the technology actually does. For years, SCSI has been the storage interface of choice for servers. Strictly speaking, SCSI refers to the interface between the server and the storage device, but most people refer to a drive with a SCSI interface as simply a SCSI drive.
SCSI was initially adopted as a standard interface for server devices because of its ability to connect multiple devices (up to 16) through one interface. The technology was improved over the years, leading to faster speeds and wider buses (the paths along which transferred data travels). However, traditional parallel SCSI (having progressed to the SCSI-3 standard) wasn’t keeping up with the serial technologies being developed for lowly PC systems, like SATA, which boasts a 3Gb/sec transfer rate.
Enter SAS, which is actually an acronym that contains an acronym, standing for Serial Attached SCSI. In 2009, an updated SAS standard is due to be released, allowing transfer rates of up to 6Gb/sec - twice that of comparable PC components. This will allow fast accessing, copying, and transferring of data from servers to clients, other drives, or backup systems.
RAID
RAID stands for Redundant Array of Independent Disks, and for once the acronym is actually rather appropriate: it is a method of storing data across multiple disks, either to create one large virtual storage space or to provide redundancy. RAID has several implementations, distinguished by a single trailing number (e.g. RAID-0, RAID-6).
RAID technology can be implemented in a variety of ways, but you most commonly hear of “Hardware RAID”, where a dedicated controller manages the array, and “Software RAID”, where the array is managed by the operating system and software.
Some of the lower RAID levels - treating multiple disks as one mass storage area, or disk mirroring - can occasionally be found in computer enthusiasts’ personal computers, but the technology is more often implemented on servers, where it is practically a staple of any server environment. The main purpose of RAID in servers is to provide redundancy in case of disk failure. A typical RAID array has three or more disks that the system treats as if they were one. When data is written to one disk, it can be mirrored onto the others; alternatively, part of each disk can store data while the rest holds parity information - a sort of mathematical summary from which the contents of a failed disk in the array can be reconstructed. Depending on the RAID level, more disks can mean better redundancy, making it less likely that a disk failure will result in lost data or downtime for the server.
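The parity idea described above can be sketched in a few lines of Python. This is a toy illustration (not any real RAID level’s on-disk format): a “parity disk” is built by XOR-ing two “data disks” byte by byte, and either data disk can then be rebuilt from the survivor plus the parity.

```python
def make_parity(disk_a: bytes, disk_b: bytes) -> bytes:
    """Parity 'disk': byte-wise XOR of the two data disks."""
    return bytes(a ^ b for a, b in zip(disk_a, disk_b))

def rebuild(surviving: bytes, parity: bytes) -> bytes:
    """Recover a failed disk by XOR-ing the survivor with the parity."""
    return bytes(s ^ p for s, p in zip(surviving, parity))

disk_a = b"server"
disk_b = b"backup"
parity = make_parity(disk_a, disk_b)

# Simulate losing disk_b: XOR of disk_a and parity restores its contents.
recovered = rebuild(disk_a, parity)
assert recovered == disk_b
```

Real parity RAID (e.g. RAID-5) stripes data and rotates parity across all the disks, but the recovery math is this same XOR trick applied per stripe.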
Multiple Processors
Another common feature of servers that you will rarely, if ever, see in a personal computer is multiple processors. Processing is a huge part of a server’s core function - whether it’s routing network traffic, providing data for Internet users to access, or updating and storing massive amounts of data, the “brain” is always going to be tasked to the limit. Much like two heads are better than one, having multiple processors allows the server to handle its workload much more quickly and efficiently. Fortunately, server operating systems are designed to recognize multiple processors and assign them tasks or workloads individually, so as not to cause any mix-ups in the system.
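To get a feel for how splitting work across processors speeds things up, here is a minimal user-space sketch using Python’s multiprocessing pool. The `handle_request` function is a hypothetical stand-in for a CPU-bound server task; the operating system schedules the worker processes onto separate processors, much as a server OS divides real workloads.

```python
from multiprocessing import Pool, cpu_count

def handle_request(n: int) -> int:
    # Stand-in for a CPU-bound server task (e.g. hashing or compression).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    requests = [100_000] * 8  # eight identical "requests"
    # One worker per available processor; the OS spreads them over the CPUs.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(handle_request, requests)
    print(f"Handled {len(results)} requests")
```

With a single processor these eight tasks would run one after another; with several, they run side by side, which is exactly the benefit multi-processor servers are built around.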
This post is part of the series: Comprehensive Guide to Server Hardware and Technology
Servers are very different from your average computer, and contain technology that is only rarely found in other systems. This guide explains the roles servers play and how their unique hardware and technologies help them fill those roles. We also look at how to maintain and house servers.