
I am running several fileservers using this controller, filesystem and disk setup.

All of them suffer from poor write performance: once the 256 MB BBU write cache is full, I get very high iowait (>40) and the write speed drops to a few MB/s.
It gets even worse when the servers handle medium to heavy reads while writing.

I am looking for suggestions on how to tweak the controller or the filesystem to improve write performance.

Some data about the Raid Array and Controller:

RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:5.456 TB
State: Optimal
Stripe Size: 64 KB
Number Of Drives:4
Span Depth:1
Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Enabled
Encryption Type: None

Product Name    : PERC 6/i Integrated
FW Version         : 1.22.12-0952
BIOS Version       : 2.04.00

Data about the filesystem:

Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

Default mount options are used and the filesystem was created using the default options of the mkfs.ext4 command.
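One thing the default mkfs.ext4 options do not get from a hardware RAID controller is the array geometry. If the filesystem were ever recreated, it could be told about the stripe layout via the stride/stripe-width extended options. A minimal sketch of the arithmetic for this array (64 KB stripe, RAID 5 on 4 drives, i.e. 3 data-bearing disks, assuming the default 4 KB ext4 block size; /dev/sdX is a placeholder, and recreating the filesystem destroys its data):

```shell
#!/bin/sh
# Geometry from the controller output above (4 KB ext4 blocks assumed).
STRIPE_KB=64    # controller stripe size
BLOCK_KB=4      # ext4 block size (mkfs.ext4 default for large filesystems)
DATA_DISKS=3    # RAID 5 on 4 drives -> 3 data disks per stripe

STRIDE=$(( STRIPE_KB / BLOCK_KB ))   # filesystem blocks per stripe chunk
WIDTH=$(( STRIDE * DATA_DISKS ))     # filesystem blocks per full stripe

# /dev/sdX is a placeholder; print the command instead of running it.
echo "mkfs.ext4 -E stride=${STRIDE},stripe-width=${WIDTH} /dev/sdX"
```

With these numbers ext4 can try to allocate and flush in full-stripe units, which avoids some RAID-5 read-modify-write cycles on large sequential writes.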

Just to illustrate my use case, here is what these servers are doing:
They serve files via lighttpd at 40-80 MB/s, and new files are periodically downloaded to the servers via FTP.
The files are between 800 MB and 6 GB.
Serving the files works great, without any noticeable iowait, but every time the FTP transfers kick in to fetch new files, you can see the servers really struggling.

As requested, here is the bonnie++ output:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
XXXXXXX          8G   580  99 94284  14 61903   9  2853  83 189033  11 420.5   8
Latency             14004us     825ms    1548ms     105ms     202ms   98036us
Version  1.96       ------Sequential Create------ --------Random Create--------
XXXXXXX             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  5 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               406us     535us     598us     374us      21us      60us

The disks in use on all servers are WDC WD2002FAEX-007BA0 (serial prefix D-WMAY03176700).

A few random points:

  • Go RAID 10 [you will lose the data in the process].
  • Mount all 'busy' filesystems with the noatime option in fstab.
  • Experiment with different IO schedulers - check what works best for you.
  • Your drives seem large-ish - most probably they have a physical sector size of 4 KB rather than 512 B - make sure your partitions are aligned to the disk and raid-stripe boundaries [ 1, 2 ; you will lose data in the process ].
  • I assume you have a lot of RAM that is used for IO buffers; if so, reconfigure your PERC 6/i cache to be write-only, with no read-ahead.
  • Benchmark write speed again - let's say it's X; throttle uploads to e.g. 60% of X to leave 'spare' IO for reads.
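The last point can be sketched numerically. As a stand-in for X, take the ~94284 K/sec sequential block write from the bonnie++ run above (re-measure after any changes; the wget flag in the comment is just one example of a rate limiter):

```shell
#!/bin/sh
# "Throttle uploads to ~60% of X": X taken from the bonnie++ block-write
# figure above (94284 K/sec, roughly 92 MB/s).
X=92                          # measured sequential write speed, MB/s
LIMIT=$(( X * 60 / 100 ))     # cap for the ftp fetches; leaves ~40% for reads
echo "throttle downloads to about ${LIMIT} MB/s"
# e.g. with wget: wget --limit-rate=${LIMIT}m <url>
```

The point of the headroom is that a saturated write path is exactly what makes the concurrent reads stall, as described in the question.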
    • I have tried RAID 10; it made no difference in write performance - maybe because with RAID 10 it does not need to calculate parity, but it does need to write to two disks. I will try noatime. I did experiment with IO schedulers; anything other than deadline or noop made it a lot worse. As for alignment, I will give it a try. I did just try setting the cache to write-only, and the server almost came to a halt, so I assume sustaining that much read speed is not possible without read-ahead.

You may run bonnie++ again with -n 1024 so that it creates 1024 files instead of 5. All those +++ marks mean that creating, reading and deleting 5 files was way too fast to give any numbers to compare; this way you will know which of the optimizations suggested above by pQd actually help.
