
ZFS write performance with SAS expander

I have a ZFS storage server running Ubuntu 16.04. I recently added a 12G SAS expander with six Seagate ST8000NM0055 drives.

When I create a pool of mirrored vdevs and try to sync data from another array in the server, performance is very poor (rsync copied around 130GB in an hour). Furthermore, the drives appear to be barely utilized while in the ZFS pool, yet they can be saturated when not in a pool.
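For reference, a pool of mirrored vdevs like the one described can be created as below. This is a minimal sketch: the pool name tank is an assumption, and the device names sdg-sdl are taken from the rsync description later in the question (in practice, /dev/disk/by-id paths are preferable to sdX names):

    # minimal sketch: three mirrored vdevs from the six expander drives
    # (pool name "tank" and sdX names are assumptions; prefer /dev/disk/by-id)
    zpool create tank \
        mirror /dev/sdg /dev/sdh \
        mirror /dev/sdi /dev/sdj \
        mirror /dev/sdk /dev/sdl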

The motherboard is a Super Micro X10DRi-T4+, and the card I am using is an LSI 9300-8E connected to the 12G expander with an SFF-8644 to SFF-8644 SAS cable.

While trying to figure this out, I destroyed the pool and tested each disk individually, but all running at the same time, using hdparm -Tt for reads and dd for writes (a sketch of the commands follows).
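A minimal sketch of that kind of per-disk test, assuming the expander drives are sdg-sdl (note the dd step is destructive, as it writes directly to the raw devices):

    # read benchmark on each drive
    for d in /dev/sd{g..l}; do hdparm -Tt "$d"; done

    # raw sequential write to every drive in parallel (destroys data on the disks)
    for d in /dev/sd{g..l}; do
        dd if=/dev/zero of="$d" bs=1M count=4096 oflag=direct &
    done
    wait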

Does anybody have any idea what could be causing this, or better yet, how I can rectify it?

See below for a capture of the output from iostat -dmx 1 during the single-drive tests and during the mirrored vdev pool transfer.
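For the pooled case, zpool iostat can complement the device-level iostat view; a minimal sketch (the pool name tank is an assumption):

    # per-vdev throughput and IOPS, refreshed every second
    zpool iostat -v tank 1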

Asciicast of the dd and hdparm tests: https://asciinema.org/a/XPveZvDnpiU9REF6QG8KjfyVK (activity starts at about the 23-second mark)

Asciicast of the mirrored vdev rsync (copying data from the mirrored vdev pool on sdc-sdf, which is internal to the server, to the mirrored vdev pool on the expander, drives sdg-sdl): https://asciinema.org/a/mjv6aiPeoXdu5I1NSccb82MaV

    • rsync is hardly a valid benchmark in this case. Can you recreate the pool and run some fio random and sequential benchmarks on it (see the sketch after these comments)? When running benchmarks, be sure to use random data or to disable compression on ZFS.
    • OK, when I do that the numbers look much better. I guess this just turned into an rsync question? Or perhaps there is a ZFS setting? I really would like to use rsync :)
    • If the source volume contains many small files, the copy will inevitably be slower than a "bulk" transfer. Anyway, you can try copying via cp -au src dst; this gives rsync-like behavior with less overhead. Otherwise, simply use rsync and wait for the transfer to complete.
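A minimal sketch of the suggested fio benchmarks, assuming the pool is named tank and mounted at /tank (both assumptions), with compression disabled first per the comment above. The psync ioengine is used, and O_DIRECT is avoided, since older ZFS on Linux releases such as the one shipped with Ubuntu 16.04 reject direct I/O:

    # disable compression so fio's data isn't compressed away (pool name assumed)
    zfs set compression=off tank

    # sequential write, 1M blocks
    fio --name=seqwrite --directory=/tank --rw=write --bs=1M --size=4G \
        --ioengine=psync --end_fsync=1

    # random write, 8k blocks
    fio --name=randwrite --directory=/tank --rw=randwrite --bs=8k --size=4G \
        --ioengine=psync --end_fsync=1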
