Are ext4 RAID5 tuning options useful under LVM?

I'm setting up a system with an mdadm RAID5 which is the sole PV for a VG that hosts 4 LVs.

When I make the filesystem, would the mkfs.ext4 -E option be useful? Or is its effect not possible to know because of LVM shenanigans?

-E extended-options
    Set extended options for the filesystem. Extended options are comma separated, and may take an argument using the equals ('=') sign. The -E option used to be -R in earlier versions of mke2fs. The -R option is still accepted for backwards compatibility. The following extended options are supported:

    stride=stride-size
        Configure the filesystem for a RAID array with stride-size filesystem blocks. This is the number of blocks read or written to disk before moving to the next disk, which is sometimes referred to as the chunk size. This mostly affects placement of filesystem metadata like bitmaps at mke2fs time to avoid placing them on a single disk, which can hurt performance. It may also be used by the block allocator.

    stripe_width=stripe-width
        Configure the filesystem for a RAID array with stripe-width filesystem blocks per stripe. This is typically stride-size * N, where N is the number of data-bearing disks in the RAID (e.g. for RAID 5 there is one parity disk, so N will be the number of disks in the array minus 1). This allows the block allocator to prevent read-modify-write of the parity in a RAID stripe if possible when the data is written.
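As a concrete sketch of the arithmetic (the layout is an assumed example: a 4-disk RAID5 with a 512 KiB mdadm chunk and 4 KiB ext4 blocks; the LV path /dev/vg0/lv_data is hypothetical):

```shell
# Assumed example layout: 4-disk RAID5, 512 KiB chunk, 4 KiB ext4 blocks.
CHUNK_KB=512
BLOCK_KB=4
NDISKS=4

STRIDE=$((CHUNK_KB / BLOCK_KB))          # chunk / block size = 128
STRIPE_WIDTH=$((STRIDE * (NDISKS - 1)))  # RAID5: one parity disk, so 3 data disks = 384

echo "stride=$STRIDE stripe_width=$STRIPE_WIDTH"
# The corresponding mkfs call would look like (not run here):
#   mkfs.ext4 -E stride=$STRIDE,stripe_width=$STRIPE_WIDTH /dev/vg0/lv_data
```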

This makes sense only if you ensure that your PVs are aligned to the RAID chunk size (the LVs on top of them should then be aligned automatically). You can check that with:

pvs -o pe_start,pv_name --units s
dmsetup table name   # replace "name" with the device name you see in /dev/mapper
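To illustrate the check itself (the pe_start value and chunk size below are assumed examples, not taken from the question; pvs reports pe_start in 512-byte sectors with --units s):

```shell
# Assumed: `pvs -o pe_start --units s` reported 2048 sectors for this PV,
# and the RAID chunk size is 512 KiB.
PE_START_SECTORS=2048
CHUNK_KB=512

PE_START_KB=$((PE_START_SECTORS * 512 / 1024))   # sectors are 512 bytes each
if [ $((PE_START_KB % CHUNK_KB)) -eq 0 ]; then
    echo "aligned"      # 1024 KiB is a whole multiple of the 512 KiB chunk
else
    echo "misaligned"
fi
```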

This doesn't answer your question (which was already answered above), but using RAID5 is a bad idea: it's too slow and too prone to failure. For a discussion see:

"As of August 2012, Dell, Hitachi, Seagate, Netapp, EMC, HDS, SUN Fishworks and IBM have current advisories against the use of RAID 5 with high capacity drives and in large arrays.[51] http://community.spiceworks.com/topic/251735-new-raid-level-recommendations-from-dell"

"when a disk fails in a RAID 5 array and it has to rebuild there is a significant chance of a non-recoverable read error during the rebuild (BER / UER). As there is no longer any redundancy the RAID array cannot rebuild"

I would strongly advise using RAID10, or RAID6 if you really need the extra space. With mdadm you can even use an odd number of disks, such as a 3- or 5-disk RAID10.
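A hedged sketch of such a command (the device names and the /dev/md0 array name are assumptions; mdadm's RAID10 supports odd device counts via its near/far layouts):

```shell
# Illustrative only: these device names are assumptions, and running this
# destroys any data on the listed partitions.
mdadm --create /dev/md0 --level=10 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# Or a 5-disk RAID10 with the "far 2" layout for better sequential reads:
#   mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=5 /dev/sd[a-e]1
```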

    • The OP makes no mention of what size or speed these drives are. The RAID5 issue you're talking about only applies to large, slow disks (i.e. 7200 rpm drives >= 1 TB). For all we know he has 8x 15k SAS drives at 172 GB each.
    • You're ignoring that a RAID5 of any size is very vulnerable: it tolerates only a single disk failure. In addition, RAID5 is horribly slow; a 25% performance penalty is not unusual. And it's a long shot to assume someone would be foolhardy enough to run a bunch of small old disks in a RAID5 configuration when 1+ TB drives are the norm.
