The Future of Disk Access Time
The question is not how to eliminate hard disk latency but how to minimize it: a complete lack of latency is impossible for hard disks. Mechanical latency can only be reduced so far, because a disk must physically spin and cannot do so instantaneously (or even close to it). Nor, at a finer scale, can bits of data move around within a computer without taking any time at all. There is little that hardware developers can do about these fundamental limits, though granted, we've come a long way.
The delay due to spin-up can be reduced by keeping the disk spinning at all times. However, this consumes a lot of power and can wear the hard disk out more quickly. Many operating systems therefore include power-saving features that minimize how often the drive must be accessed by keeping more data in cache. As caching algorithms improve and more cache memory becomes available, this will likely get better still.
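The caching idea above can be sketched in a few lines of Python. This is a simplified illustration, not a real OS mechanism: `read_block`, the `DISK` dictionary, and the `disk_reads` counter are all hypothetical stand-ins, with the cache played by `functools.lru_cache`.

```python
from functools import lru_cache

DISK = {0: b"boot", 1: b"data"}  # pretend disk blocks (hypothetical)
disk_reads = 0                   # counts actual "disk" accesses

@lru_cache(maxsize=128)
def read_block(block_no: int) -> bytes:
    """Fetch a block, touching the 'disk' only on a cache miss."""
    global disk_reads
    disk_reads += 1
    return DISK[block_no]

# Four requests, but only two reach the disk; the rest are
# served from cache, so the drive could stay spun down.
for n in (0, 0, 1, 0):
    read_block(n)
print(disk_reads)  # → 2
```

The same principle, at much larger scale, is what lets an operating system leave a drive idle: repeated requests for hot data never reach the platters at all.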
Seek time can be somewhat reduced as disks become smaller and more compact, so that the read/write head does not have to travel as far; this has improved in leaps and bounds over the last few decades. Rotational delay also keeps falling as hard disks spin faster and faster, pushing the limits of how much heat these speeds can safely generate.
Latency, and the fact that it can never entirely go away, has powerful implications for developing technologies. Cloud computing, for instance, is especially sensitive to latency, yet there is not much that can be done about it, given the physical restrictions already mentioned. For this reason, other technologies are rising up that threaten to replace hard disks, largely because of their lower latencies, such as solid-state drives (SSDs).