Flushing disk cache for performance benchmarks?

I'm running some performance benchmarks on a heavy SQL script on PostgreSQL 8.4 on an Ubuntu box (Natty).

I'm experiencing pretty unstable performance, even though I'm supposed to be the only one using the machine (the same script on the exact same data might run in 20 minutes one time and 40 minutes the next, for no apparent reason).

So, remembering my distant DBA training, I decided I should flush the PostgreSQL cache using sudo /etc/init.d/postgresql restart, but the timings are still shaky!

My question: am I missing some caches at the disk/OS level? I'm using a NetApp appliance as my storage. Am I on the right track? And should I even be trying to get repeatable performance before I start tuning?

If your storage is network mounted, then activity on the network and on the storage appliance can change your results. There are several layers of caching involved in a configuration such as yours:

  • Database cache
  • O/S cache
  • NetApp appliance cache
  • Disk/controller cache

In your case I would expect the O/S and NetApp caches to be factors. More likely still, the variation comes from contention for the NetApp appliance itself.

Many of these caches are difficult to flush, and in my experience flushing them is not all that useful anyway. Unless you are running the query on an otherwise unused database/server, many other factors will have a larger impact on your results.
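For completeness, the two host-side layers can be cleared between runs on Linux. This is only a sketch, assuming a Debian/Ubuntu-style init script as in the question; the NetApp and disk/controller caches cannot be flushed from the client at all:

```shell
# Discard PostgreSQL's shared buffers by stopping the server
sudo /etc/init.d/postgresql stop

# Write dirty pages out, then drop the Linux page cache,
# dentries and inodes (requires root; Linux 2.6.16 or later)
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

sudo /etc/init.d/postgresql start
```

Even after this, the first run will repopulate every cache along the path, so back-to-back runs will still differ.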

Even if you are the only user on the system, there are cron jobs which run periodically and use resources. See if you get more stable results by running your test at the same number of minutes past the hour (9:15, 10:15, 11:15, ...).
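One way to keep the start times aligned is to drive the benchmark from cron itself. A sketch of such a crontab entry, where the script path, log path, and database name are placeholders:

```shell
# Hypothetical crontab entry: start the benchmark at 15 minutes past
# every hour and append the elapsed wall-clock time to a log
15 * * * * /usr/bin/time -a -o /tmp/bench.log psql -f /home/me/benchmark.sql mydb
```

Comparing the logged times across several hours shows whether the variance is periodic (pointing at scheduled jobs) or random (pointing at shared storage).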

You may want to set up a munin server to monitor your test machine and see whether the resource profiles look similar across different runs. Running sar in the background can also provide useful information on bottlenecks; sar is provided by the atsar package.
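A minimal way to capture sar data alongside a run (note that on current Ubuntu releases sar ships in the sysstat package rather than atsar; the benchmark command here is a placeholder):

```shell
# Sample system activity every 5 seconds into a binary file
# until the benchmark finishes
sar -o /tmp/bench_sar.dat 5 > /dev/null 2>&1 &
SAR_PID=$!

# ... run the benchmark here, e.g. psql -f benchmark.sql mydb ...

kill $SAR_PID

# Replay the recording afterwards to look for bottlenecks
sar -f /tmp/bench_sar.dat -u   # CPU utilisation
sar -f /tmp/bench_sar.dat -b   # I/O and transfer rates
```

If the slow runs show high I/O wait while CPU stays low, that points back at the NetApp path rather than the database itself.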
