I have a high-IO application that is working and scaling well so far. Over the past few months we've been trying to look down the road and predict where our next bottlenecks will occur. One of them is surely the file system.

We are currently monitoring

  • Space available
  • Read operations per second
  • Write operations per second

This seems a bit too sparse to me. What else should I be watching? I'm not even sure what the 'yellow line' would be for operations per second.
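For reference, here is roughly how we sample the read/write operation counts, by diffing `/proc/diskstats` over a one-second window (Linux only; the device selection is a placeholder, in practice you would name your data volume):

```shell
#!/bin/sh
# Rough sketch: read/write completions per second, derived by diffing
# the monotonic counters in /proc/diskstats (field 4 = reads completed,
# field 8 = writes completed). We pick the first device listed here
# purely for illustration; substitute your own device name.
DEV=$(awk 'NR==1 {print $3}' /proc/diskstats)

snap() {
    awk -v d="$DEV" '$3 == d {print $4, $8}' /proc/diskstats
}

read r1 w1 <<EOF
$(snap)
EOF
sleep 1
read r2 w2 <<EOF
$(snap)
EOF

echo "reads/s:  $((r2 - r1))"
echo "writes/s: $((w2 - w1))"
```

In practice `iostat -dx 1` from the sysstat package reports the same numbers plus queue depth and service times.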

Some suggestions

  • Look at the read queue size. If your application's access pattern is highly random, tweak the readahead in /sys/block/<dev>/queue/read_ahead_kb to ensure you're reading data you need, not data the OS thinks you need.
  • Switch to the deadline I/O scheduler if you haven't already.
  • Use the noatime mount option unless you're hosting a mail spool.
  • Mount with data=writeback if you've got good backups.
  • Keep an eye on your directory sizes. Hashed directory inodes help, but if you can hash the data yourself you'll get more consistent results.
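A sketch of how the tunables above might be applied, assuming an ext4 filesystem on /dev/sda mounted at /data (both placeholders; substitute your own device and mount point, and note all of this needs root):

```shell
# Shrink readahead for a random-access workload (value is in KB;
# 64 here is an illustrative starting point, not a recommendation).
echo 64 > /sys/block/sda/queue/read_ahead_kb

# Switch the I/O scheduler to deadline for this device.
echo deadline > /sys/block/sda/queue/scheduler

# noatime can be applied on a live remount:
mount -o remount,noatime /data

# The journaling mode cannot be changed on a live remount, so
# data=writeback goes in /etc/fstab and takes effect on the next
# mount, e.g.:
#   /dev/sda1  /data  ext4  noatime,data=writeback  0  2
```

To make the readahead and scheduler settings survive a reboot, put them in a boot script or udev rule.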
    • Our directory structure is pretty well hashed as is, but will look again. What would you suggest as "too big" or "too many files" for a directory?
    • Take a look at how Squid hashes its proxy store. They've got some good documentation on choosing a hashing depth and the tradeoffs.
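A hypothetical sketch of that kind of two-level hashing, similar in spirit to Squid's L1/L2 cache_dir layout: derive each file's path from a hash of its name, so files spread evenly across 16 × 256 = 4096 directories (the key, base path, and digit split are all illustrative):

```shell
#!/bin/sh
# Two-level directory hashing sketch: take the first three hex digits
# of the md5 of the object name, use digit 1 as the first-level
# directory (16 buckets) and digits 2-3 as the second level (256
# buckets per first-level directory).
key="user-42-avatar.png"               # placeholder object name
h=$(printf '%s' "$key" | md5sum | cut -c1-3)
l1=$(printf '%s' "$h" | cut -c1)       # first hex digit: 16 buckets
l2=$(printf '%s' "$h" | cut -c2-3)     # next two digits: 256 buckets
path="/data/store/$l1/$l2/$key"
echo "$path"
```

Because the hash is computed from the name alone, any frontend can locate a file without a lookup table, and the fan-out keeps each directory small regardless of how names cluster.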
