
3.5" 15k RPM drives vs 2.5" 10k RPM drives

How do 2.5" 10k RPM SAS drives perform compared to 3.5" 15k RPM SAS drives?

Specific areas of comparison:

  • Random write
  • Random read
  • Sequential write
  • Sequential read

You should be concerned with head seek time and transfer rate. It is true that they depend on the form factor, but they also depend on many other variables. Looking only at the physical size and ignoring those variables would be wrong.

With this in mind, let's compare the most recent versions of some widely used disks: Seagate Cheetah 15K.7 and Savvio 10K.3.

  • Random reads and writes: average seek time of 3.4 ms (read) and 3.9 ms (write) vs. 4.2 ms and 4.6 ms; rotational latency is 2.0 ms vs. 3.0 ms. Therefore the Cheetah should hit about 185 IOPS read and 170 IOPS write, while the Savvio will do about 140 IOPS read and 130 IOPS write.
  • Sequential reads and writes: looking at sustained transfer rates, we see 122-204 MBps for the Cheetah and 67-124 MBps for the Savvio. These are maximums; the range is so wide because tracks near the outer edge of a platter hold more data than tracks near the inside.
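The IOPS estimates in the list above come from a simple rule of thumb: one random I/O completes per average seek plus rotational latency. A minimal sketch (function name is just for illustration):

```python
def estimated_iops(avg_seek_ms, rotational_latency_ms):
    """Rough random-I/O ceiling: one operation per (seek + half-revolution)."""
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Cheetah 15K.7 (2.0 ms latency) vs. Savvio 10K.3 (3.0 ms latency)
print(estimated_iops(3.4, 2.0))  # read,  ~185 IOPS
print(estimated_iops(3.9, 2.0))  # write, ~170 IOPS
print(estimated_iops(4.2, 3.0))  # read,  ~140 IOPS
print(estimated_iops(4.6, 3.0))  # write, ~130 IOPS
```

This ignores command queueing and controller cache, so real drives can do better under load; it is only the single-request ceiling.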

Bottom line: 2.5" disks are comparable to 3.5" disks for random operations, and usually you can compensate or even win with 2.5" due to the larger number of spindles in the same physical volume (disk array or server). However, if you need to pump a lot of sequential data, 3.5" disk is still the king.
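To see what that sustained-rate gap means in practice, here is a small sketch of how long it takes to stream a given amount of data at each drive's quoted outer-track maximum (the 100 GiB workload size is a hypothetical choice for illustration):

```python
def transfer_seconds(size_gib, rate_mib_per_s):
    """Time to stream size_gib gibibytes at a sustained rate in MiB/s."""
    return size_gib * 1024 / rate_mib_per_s

# Outer-track maximums quoted above
print(transfer_seconds(100, 204))  # Cheetah 15K.7: ~502 s
print(transfer_seconds(100, 124))  # Savvio 10K.3:  ~826 s
```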

    • In this case, however, you could buy double the number of 10k drives for the price of the 15k drives, and unless the drives are using just the motherboard controller, the array controller will change these numbers.
    • Your math won't work: 15K 3.5" disks are only 30% more expensive than 10K 2.5" disks. You're also ignoring the built-in cost of every disk slot in the server or the enclosure. And I am very puzzled by your mention of motherboard controllers: we're not talking here about home desktops with their little southbridges.
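That cost argument can be sketched with the thread's own figures: a ~30% price premium for the 15K drives and the per-drive read IOPS estimates from the answer above. The budget and unit prices are hypothetical:

```python
# Prices in arbitrary integer "cents" to avoid float surprises.
price_10k, price_15k = 100, 130        # "~30% more expensive" claim above
read_iops_10k, read_iops_15k = 140, 185

budget = 1300                           # hypothetical fixed budget
n_10k = budget // price_10k             # 13 drives
n_15k = budget // price_15k             # 10 drives

print(n_10k * read_iops_10k)   # 1820 aggregate read IOPS
print(n_15k * read_iops_15k)   # 1850 aggregate read IOPS
```

At this price ratio the aggregate random throughput comes out roughly even, which is why the slot cost and physical density arguments end up deciding it.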
    • You need to divide 1 second by the sum of average seek and latency. And, of course, I made one mistake. The correct numbers for Savvio should be 140 and 130 (I used the wrong latency number).

Before I started my current job, our ERP package had been moved to a new server by a predecessor, who intended to decommission and scrap the old machine. Performance was appalling, and both the vendor and that predecessor thought it was due to a configuration error, although they never found the cause. After taking over the job and examining all the factors, it was clear to me that the 2.5-inch 10K SAS drives on the new server simply couldn't keep up. I therefore moved the ERP package back to the old server, which has 3.5-inch 15k SCSI drives. Performance is now back where it should be. In both cases the drives are arranged as a three-drive RAID 5 array.

On the subject of the head actuator others have mentioned, they have failed to take into account that the actuator in a 3.5-inch drive is generally considerably more powerful than the one in a 2.5-inch drive. The consequence is that although the heads need to travel a little further (about one tenth of an inch or less), they can do so more rapidly. This negates the perceived advantage of the smaller drive.

    • I'm not sure where you'd find that information online. I learned it talking to a design engineer from one of the drive manufacturers a couple of years ago. Offhand I can't even recall which manufacturer it was.

Every drive model is unique, and there are a ton of other factors that come into play beyond form factor and rotational speed, but in general:

  • Random performance is governed by how quickly you can get the target data under the heads, so higher RPM ==> better random performance.
  • Sequential performance is determined by how fast the data passes under the heads: a combination of the number of platters/heads, the rotational speed, and the data density of the platters.
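The "higher RPM" point maps directly onto the latency figures quoted earlier in the thread: average rotational latency is half a revolution. A quick check:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = half a revolution, in milliseconds."""
    return 60_000.0 / rpm / 2

print(avg_rotational_latency_ms(15_000))  # 2.0 ms
print(avg_rotational_latency_ms(10_000))  # 3.0 ms
```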

More importantly, will the difference matter? If this is for a disk mirror set, you will not be able to notice the difference except in benchmarking tools, since controller cache, how the OS schedules I/O, and the application's I/O pattern will all factor in to make the difference meaningless. However, if you are loading up a SAN with 150 of these, you will notice a difference (as Chopper mentioned above). You should still determine whether that difference is worth the price; many times the price difference could be the cost of more drives, which would make the difference irrelevant. See this storage advisors link for some published example details.

    • Whether the difference will matter depends on the application and what the difference actually is. And the RAID setup has much less to do with this than the application's IO characteristics.
    • "you will not be able to notice the difference except in benchmarking tools"? Could you be any more wrong?
    • Having benchmarked applications (usually SQL-based) on various subsystems, I can say with authority that you can measure apps on 10k vs. 15k and 2.5" vs. 3.5" systems and it has no measurable impact. If you measure raw IOPS you can measure differences, but measuring IOPS is not measuring application performance. If you have some data showing that a system with 4 GB of RAID cache and, say, five 10k drives will magically perform worse for applications than the same system with three 15K drives, I'd love to see it. Price/performance matters in the real world.
    • @sh-beta - this is absolutely correct. Certainly an app that does a whole bunch of sequential I/O vs. an app that does a lot of random I/O vs. an app that does a lot of repeated I/O is going to have different performance metrics on the exact same hardware (not to mention the effect of filesystem and block size). The RAID controller has far more of an impact than the actual drives, because at some point (usually sooner than you think) you will saturate the controller - at which point controller cache performance kicks in, which is why ARC has essentially replaced the sequential MRU cache algorithm.
