
How does one 'qualify' a server?

Here is the scenario. Say you are going to be purchasing a great deal of servers, but first you are trying out hardware from a vendor and need to qualify it.

Also, say that the custom software that will run on it is not even written (or the current version is so 'alpha' that it can't provide much input into this process).

Edit: The hardware is CPU heavy and memory heavy, but light on disk usage. (mainly just logging).

At this point, with my limited experience, all I can think to do is install linux, and start running memory tests, hard drive tests, cpu tests--anything I can find by googling.

I don't mind doing this, but I am wondering if I'm missing something--perhaps some uber package that tests multiple facets of hardware (maybe even tests things I don't know to test).

Does anyone have any suggestions based on experience?

    • You have to specify what the custom software is supposed to do. Does it even care about the underlying hardware (beyond having enough memory, disk space, etc.)? If it doesn't, then getting the box running with Linux is probably sufficient.
    • edited. My edit is still a bit vague, but that's actually where I'm at; the question is unfortunately a bit open-ended on this end, too.

You're going to buy a lot of servers that have to be "qualified" for an application that isn't written yet. That's like buying a truck for a purpose you don't know yet.

There's really no way to "qualify" a system unless you know what the requirements are. Usually, qualified systems are those tested to work with a given application using a tight set of specified drivers; that way, if someone upgrades the video driver, disk controller, or any other component, you can say you won't support the application in that configuration because you didn't test it.

All you're describing with memory and disk tests is just burn-in, which most reputable vendors already do with server equipment before shipping it to you. There's nothing really wrong with repeating it yourself, but it's mostly spinning your wheels.

You need to talk to the people making the application and find out exactly what the design requirements are. Then install the application, test it, and note what drivers, software, and hardware you're using; if the application works, it's qualified.

If they don't know and expect you to magically ascertain the requirements to qualify their application when they haven't even made it yet, you're working for people who think system administration is magical.

You can't qualify a configuration for vaporware.

    • I would certainly agree you always want requirements up front. Sometimes that's just the way it goes, though.
    • I can sympathize to a degree with "the way it goes," but they aren't just asking you to do something not recommended. They're asking for the impossible. Want to build a corral for a unicorn? You can take a guess how to do it, but you'll inevitably get it wrong.

You're talking about two different things. There's burn-in, which is needed to shake out hardware (and possibly OS) issues before placing the system into production. Then there's performance testing: comparing the system to a baseline and understanding how the hardware behaves with your specific application. You need to be able to answer questions like:

  • Will SAS disks be good enough?
  • What RAID solution should I use?
  • Do I need SSDs? Will slow disks be enough?
  • Does adding more RAM have an appreciable effect on application performance?

For burn-in, I'll PXE boot the system into a memory or stress-test loop (memtest works). If I do burn-in after OS installation, I'll run the stress utility for some period of time to shake out any hardware issues. That tool can be set to stress the CPU, virtual memory, disk and other subsystems...
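A minimal burn-in wrapper around the stress utility might look like the sketch below. The duration, load parameters, and the dmesg error patterns are assumptions, not a vendor-blessed recipe:

```shell
#!/bin/sh
# Hypothetical burn-in wrapper (assumes the `stress` utility is installed).
# Runs a combined CPU/VM/disk load for a configurable number of hours,
# then scans the kernel ring buffer for hardware-looking errors.

HOURS=${1:-24}            # default: 24-hour burn-in
SECS=$((HOURS * 3600))

stress --cpu "$(nproc)" \
       --vm 4 --vm-bytes 1G \
       --hdd 2 \
       --timeout "${SECS}s"

# Anything MCE- or I/O-error-shaped in dmesg after a long stress run
# is worth escalating to the vendor before deployment.
if dmesg | grep -iqE 'mce|hardware error|i/o error'; then
    echo "FAIL: check dmesg for hardware errors"
else
    echo "burn-in clean"
fi
```

Run it as `./burnin.sh 48` for a two-day soak; reading dmesg may require root on some distributions.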

Some manufacturers (like HP) include a maintenance CD that can also run automated test loops on the installed hardware components.

For performance testing, I'll build the servers up and run something like the WHT UnixBench variant to obtain a composite relative score I can compare against other systems deployed in the environment. Make sure you receive similar results across the fleet of servers.
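Checking that the fleet scores are consistent is easy to script. This sketch assumes you've collected one `hostname score` pair per line into a file called scores.txt, and the 10% threshold is arbitrary:

```shell
# Flag any host whose composite benchmark score is more than 10% below
# the fleet average. Pure awk, no external dependencies.
awk '{ s[$1] = $2; total += $2; n++ }
     END { avg = total / n
           for (h in s)
               if (s[h] < avg * 0.90)
                   printf "%s below fleet average: %s vs %.1f\n", h, s[h], avg
     }' scores.txt
```

A host that scores noticeably low here is exactly the kind of unit you want to swap or re-test before the application ever touches it.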

Specific testing of the networking and storage subsystems can be accomplished with the actual production application (or a simulated workload), or by using the normal suite of benchmarking tools (e.g., iperf for networking, iozone or bonnie++ for storage).
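As a rough sketch of those invocations (hostnames, paths, and parameters below are placeholders, and iperf3/bonnie++ must be installed separately):

```shell
# Network throughput: start `iperf3 -s` on a peer machine first, then
# from the server under test:
iperf3 -c peer-host -t 30 -P 4     # 30 seconds, 4 parallel streams

# Storage: size the bonnie++ working set at roughly 2x RAM so the page
# cache doesn't mask real disk performance. MemTotal is in kB, so divide
# by 1048576 to get GiB (rounded up).
RAM_G=$(awk '/MemTotal/ {print int($2 / 1048576) + 1}' /proc/meminfo)
bonnie++ -d /data/bench -s "$((RAM_G * 2))g" -n 0 -u nobody
```

Comparing these numbers across identical units, rather than against an absolute target, is what tells you whether a given box is an outlier.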

Really specific platform testing in a CPU speed or latency-sensitive environment can be accomplished using tuning tools like oscilloscope and cyclictest. This is also helpful for seeing how external loads impact the system. But that's probably too much for most server deployments...

The best performance tests will always come from the intended application and a realistic workload, though.

    • +1 both ewwhite and Bart, but I had to go with this one because of the good explanatory text. Bart is correct though too; there should be better requirements in this sort of situation.
