What Is Latency?
Broadly speaking, latency is the amount of time delay in a system. For hard drives in particular, it’s the time required to retrieve data from the disk drive: the gap between the moment the processor requests data and the moment the drive delivers it. This is referred to specifically as disk access time.
What Causes Latency?
For hard disks, latency is principally mechanical, caused by the physical process of spinning the disk and reading from it. To prevent confusion with other sorts of delay, this is sometimes referred to as rotational latency.
Specifically, this includes spin-up time, the time required to set the disk spinning; seek time, the time it takes for the access arm to reach the correct disk track; rotational delay, the time it takes for the desired sector to rotate under the read/write head, which depends on the disk’s spindle speed in revolutions per minute; and transfer time, the time it takes for the data to actually be read or written. For a general explanation as to how a hard disk works, check out this article.
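To make these components concrete, here is a minimal sketch of how they add up to an average access time. The drive figures used (9 ms seek, 7200 RPM, 150 MB/s transfer) are illustrative assumptions, not specs of any real drive:

```python
def average_access_time_ms(rpm, avg_seek_ms, bytes_to_read, transfer_mb_per_s):
    """Average access time = seek time + rotational delay + transfer time.

    Spin-up time is omitted: it only applies when the disk is starting
    from rest, not to a typical request on an already-spinning drive.
    """
    # On average the desired sector is half a revolution away from the head.
    ms_per_revolution = 60_000 / rpm
    rotational_delay_ms = ms_per_revolution / 2
    # Time to actually read the requested bytes off the platter.
    transfer_ms = bytes_to_read / (transfer_mb_per_s * 1_000_000) * 1000
    return avg_seek_ms + rotational_delay_ms + transfer_ms

# Hypothetical 7200 RPM drive reading a 4 KiB block:
print(f"{average_access_time_ms(7200, 9.0, 4096, 150):.2f} ms")  # → 13.19 ms
```

Note how little of the total the transfer itself contributes for a small read: almost all of the delay is mechanical (seeking and waiting for the platter to rotate).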
Hardware glitches can also be a cause of considerable latency. An amusing example of this can be found in this article.
The Future of Disk Access Time
The question is not how to eliminate hard disk latency so much as how to minimize it: a complete lack of latency is impossible for hard disks. For mechanical latency, how much this can be minimized is limited by the fact that a disk has to physically spin and cannot do so instantaneously (or even close to it). Nor, in a more minute sense, can bits of data move around within a computer without taking any time to do so. There is little that hardware developers can do from this perspective, though granted, we’ve come a long way.
The delay due to spin-up can be decreased by keeping the disk always spinning. However, this consumes a lot of power and can wear the hard disk out more quickly. Many operating systems have power-saving features that minimize how often the hard drive needs to be accessed by keeping more data in cache. As better algorithms develop and more cache memory becomes available, this will likely improve further.
Seek time can be somewhat reduced as disks get smaller and more compact, so that the read/write head does not have to travel as far. This has been improving in leaps and bounds over the last few decades. Rotational delay is also ever decreasing as hard disks spin faster and faster, pushing the limits of how much heat can safely be generated by these activities.
Latency—and how it can never entirely go away—has powerful implications for developing technologies. Cloud computing, for instance, is especially vulnerable to latency. There’s not all that much that can be done, however, due to the physical restrictions already mentioned. For this reason, many other technologies are rising up that threaten to replace hard disks, largely because of their lower latencies, such as solid-state drives (SSDs).