At the dawn of the twenty-first century, 10K RPM drives were the default choice for servers and SAN arrays. After all, they were fast enough for all but the most demanding applications and cheaper than the hot-rod 15K RPM drives just coming to market. They also ran a bit cooler in the ProLiant 1850Rs and other servers of the day, which frankly didn't have the airflow engineering of today's machines, making them a safe bet.
Of course, those were the days when even Windows server admins, let alone steely-eyed storage guys, wouldn't dream of letting an ATA drive, serial or parallel, into their servers or arrays. ATA drives were for lowly PCs -- too cheap, slow and unreliable for serious use. I had one client with a SCSI-only policy for servers, so they went out and bought Iomega NAS boxes with ATA drives, which were less expensive than adding a similar amount of storage to their production file servers. A NAS box wasn't a server, so the SCSI-only policy didn't apply.
Now I'm wondering if we really need SSDs, 15K, 10K and 7200 RPM capacity-oriented drives, plus the new miserly green drives that spin at around 5400 RPM. Aren't five tiers of disk drives in our arrays one or two too many? Thankfully, the rumors floating around last year that Western Digital would release a 20,000 RPM version of its VelociRaptor haven't come to fruition, or we'd have a head-spinning six choices. The green drives are really best suited to near-line applications like storing archival and backup data, so we still have four choices when populating our production arrays.
Given the reports that Seagate will be charging the same price for 10K and 15K drives, and its inclusion of PowerTrim power management technology even on the 15K drives, why not spring for the 15Ks?
Ultimately, though the crack in my crystal ball obscures exactly when, I see more vendors following Sun's lead and building midrange systems that have just flash and capacity-oriented drives. These systems will have to give up the standard paradigm of building RAID sets across small numbers of drives and then carving the RAID sets into volumes to hold data, and instead place data in the most appropriate tier based on policies and access patterns. This kind of arrangement would be easiest in the NAS space, where we all know the vast majority of data is created and then stored without ever being accessed again, and where the system has more information about each file to work with when making its placement decisions.
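To make the idea concrete, here is a minimal sketch of what policy-based placement might look like. The tier names, thresholds and `FileRecord` fields are all illustrative assumptions, not any vendor's actual algorithm: data that is both recent and frequently touched lands on flash, everything else stays on the capacity drives.

```python
from dataclasses import dataclass
from time import time

# Hypothetical tiers for illustration: "flash" for hot data,
# "capacity" (e.g. 7200 RPM SATA) for everything else.
FLASH, CAPACITY = "flash", "capacity"

@dataclass
class FileRecord:
    path: str
    size_bytes: int
    last_access: float       # epoch seconds
    access_count: int = 0
    tier: str = CAPACITY

def place(record: FileRecord, now: float,
          hot_age_secs: float = 7 * 86400,    # "recent" = touched this week
          hot_min_accesses: int = 3) -> str:
    """Pick a tier from access recency and frequency.

    Policy (an assumption for this sketch): only data that is both
    recently and repeatedly accessed earns a flash slot; write-once,
    read-never data stays on capacity-oriented drives.
    """
    recently_used = (now - record.last_access) < hot_age_secs
    frequently_used = record.access_count >= hot_min_accesses
    record.tier = FLASH if (recently_used and frequently_used) else CAPACITY
    return record.tier

# Example: a busy database file vs. a year-old backup image.
now = time()
hot = FileRecord("/db/orders.ibd", 2 << 30,
                 last_access=now - 3600, access_count=120)
cold = FileRecord("/backup/2008-image.tar", 40 << 30,
                  last_access=now - 365 * 86400)

place(hot, now)   # -> "flash"
place(cold, now)  # -> "capacity"
```

The interesting design question is where these policies live: per-file metadata gives a NAS far more to work with than a block array sees, which is why the NAS space is the natural first home for this approach.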