I am currently benchmarking a hard drive, using HW32 for the measurement.
The result has two parts:
- random seek time: 20 ms
- random read throughput: 30 MB/s
I am not sure what methods HW32 uses for the benchmark,
but the random seek time result seems very strange to me.
From my understanding, random seek time is the time spent moving the
head to the location where a specific piece of data is stored. So I presume a
random read should involve many
random seeks, right?
For example, suppose I read 100 MB of data from the disk, and because of fragmentation the data is scattered across 1000 random locations, each holding 100 KB. When I read it, the disk head will have to move 1000 times to find all the data blocks, right?
If the random seek time is 20 ms, does that mean we will spend 1000 * 20 ms = 20,000 ms = 20 s just on seeking? I guess not, right?
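To make my confusion concrete, here is a quick back-of-the-envelope sketch in Python. All the numbers are just the assumptions from my example above (20 ms per seek, 30 MB/s throughput, 1000 fragments of 100 KB each), not anything HW32 actually reported about its method; the naive math here may well double-count seek overhead that is already baked into the throughput figure, which is exactly what I don't understand:

```python
# Naive estimate of total read time, using only the numbers from my example.
# These values are my assumptions, not measurements from HW32 itself.

seek_time_s = 0.020      # reported random seek time: 20 ms per seek
throughput_bps = 30e6    # reported random read throughput: 30 MB/s
total_bytes = 100e6      # 100 MB of data to read
fragment_bytes = 100e3   # 100 KB per fragment

fragments = total_bytes / fragment_bytes   # 1000 fragments
total_seek_s = fragments * seek_time_s     # 1000 * 20 ms = 20 s of seeking?
transfer_s = total_bytes / throughput_bps  # ~3.3 s of pure data transfer

print(f"fragments:       {fragments:.0f}")
print(f"seek overhead:   {total_seek_s:.1f} s")
print(f"transfer time:   {transfer_s:.1f} s")
print(f"naive total:     {total_seek_s + transfer_s:.1f} s")
```

If this naive model were right, seeking would dominate the read by a wide margin, which does not match my real-world experience.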
Can anyone explain this to me? If a benchmark like HW32 reports a
random seek time of 20 ms, what does that mean? Is it the
total seek time for a whole random read, or the
average time per seek?