
I have bought two Intel DC S4500 480 GB SSDs (they come with a 5-year warranty) to use with an HP DL360 G7 1U server in RAID 1. Unfortunately, after purchasing the disks, I discovered that the integrated P410i RAID controller supports 6 Gbps SAS but not 6 Gbps SATA.

These SSDs are rated at 500 MB/s sequential read, but the 3 Gbps interface limits them to ~300 MB/s. I also have a couple of 6 Gbps SAS drives (old, without the warranty or reliability of new drives). Now my question is which will perform better for a web-app hosting / KVM virtualization environment:

(a) Two SSDs in RAID 1 (on 3 Gbps links), or two 10k SAS drives in RAID 1 (on 6 Gbps links), and why?

(b) Also, compared to two 6 Gbps SSDs in RAID 1, how noticeable is the performance difference with two SSDs limited to 3 Gbps in RAID 1?
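As a sanity check on the ~300 MB/s figure mentioned above: SATA and SAS at 3 and 6 Gbps use 8b/10b line encoding, so every payload byte costs 10 bits on the wire. A small Python sketch of that arithmetic:

```python
# Effective payload bandwidth of an 8b/10b-encoded SATA/SAS link.

def effective_mb_per_s(link_gbps: float) -> float:
    """Payload bandwidth in MB/s for a link using 8b/10b encoding."""
    bits_per_byte_on_wire = 10  # 8 data bits + 2 encoding bits
    return link_gbps * 1e9 / bits_per_byte_on_wire / 1e6

print(effective_mb_per_s(3))  # 300.0 MB/s -> caps a 500 MB/s SSD
print(effective_mb_per_s(6))  # 600.0 MB/s -> headroom for the SSD
```

This is why the 3 Gbps link works out to roughly 300 MB/s rather than 375 MB/s: a quarter of the raw bit rate goes to encoding overhead.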

In most scenarios, including common virtualization workloads, the SSDs will outperform the HDDs due to lower latency and higher IOPS.

The link speed matters far less than latency or IOPS in most real-world workloads, so the difference should not be large. If you want actual numbers, you'll have to benchmark with your own workloads, as results vary greatly depending on the specific setup. The link speed matters most for large sequential I/O, e.g. copying large files.
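Since the decisive numbers for virtualization are IOPS and latency rather than link speed, fio is the usual tool for producing them. A minimal job-file sketch for a mixed 4k random workload (the filename is a placeholder; point it at a scratch file, never at a disk with data you care about):

```ini
; randrw.fio - 4k mixed random I/O, loosely resembling VM hosting load
[global]
ioengine=libaio
direct=1            ; bypass the page cache
runtime=60
time_based=1

[randrw]
filename=/path/to/testfile   ; placeholder - a scratch file on the array
rw=randrw
rwmixread=70        ; 70% reads / 30% writes
bs=4k
iodepth=32
```

Run it with `fio randrw.fio` on both arrays and compare the reported IOPS and completion latencies rather than the bandwidth figures.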


It depends on what you will need the server for.

Without a clear use case, I would pick the SSDs over the SAS drives.

The reasoning is quite simple: any SSD beats any spinning medium when it comes to seek times, and the more random your access pattern, the more the balance tips toward the SSDs.

On top of that, you didn't mention the throughput or other details of the SAS drives. The interface speed says nothing about the real throughput; they might only manage 200 MB/s, or any other figure.
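To illustrate that last point with made-up numbers (the 150 MB/s HDD figure below is an assumption, not a measurement): sustained sequential throughput is the minimum of what the link and the media can deliver, so a fast link does nothing for slow media.

```python
# Sustained sequential throughput is capped by the slower of link and media.

def sustained_mb_per_s(link_mb_s: float, media_mb_s: float) -> float:
    """Whichever of link bandwidth and media throughput is lower wins."""
    return min(link_mb_s, media_mb_s)

# Hypothetical 10k SAS drive sustaining ~150 MB/s on a 6 Gbps (~600 MB/s) link:
print(sustained_mb_per_s(600, 150))  # 150 -> the fast link sits mostly idle
# A 500 MB/s SSD on a 3 Gbps (~300 MB/s) link:
print(sustained_mb_per_s(300, 500))  # 300 -> link-limited, still ~2x the HDD
```

Even link-limited, the SSD's sequential throughput can exceed a 10k spindle's, before IOPS and latency are even considered.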


I'm only going to address the performance of case (b). I have noticed a time difference when using spinning rust in 3 Gbps and 6 Gbps environments for large sequential reads and writes. How this would translate to SSDs, I do not know.

My decidedly non-empirical results come from needing to move over 6TB of 350KB files between older systems on multiple occasions for different customers.

These were disk-to-disk transfers, in the same chassis and on the same controller, with three-drive RAID 5 sets as both source and destination.

  • 3 Gbps drives writing to 6 Gbps drives on a 3 Gbps controller gave 3 Gbps performance.

  • 3 Gbps drives writing to 6 Gbps drives on a 6 Gbps controller gave faster than 3 Gbps throughput, because the writes ran at 6 Gbps (and read speed is generally faster than write speed on any device). If I had to put a number on it, about 4.2 Gbps.

  • 6 Gbps drives writing to 6 Gbps drives on a 6 Gbps controller gave 6 Gbps performance.


This article is reproduced from Stack Exchange / Stack Overflow.