I would like to build a Linux load balancer with SSL offload and sticky sessions. I would like to do this with either Pound on its own, or Pound in front of HAProxy. I haven't done this before, and I have been wanting to learn both HAProxy and Pound for a while now. Finally I have a small use case to serve as my excuse to get my hands dirty.
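For context, the sort of chain I have in mind looks roughly like this: Pound terminating SSL on 443 and passing plain HTTP to HAProxy, which handles cookie-based stickiness. All addresses, ports, server names, and the cert path below are placeholder assumptions, not a tested config:

```
# pound.cfg (sketch): terminate SSL, hand plain HTTP to HAProxy on localhost
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/site.pem"   # placeholder path
    Service
        BackEnd
            Address 127.0.0.1       # HAProxy on the same box
            Port    8080
        End
    End
End

# haproxy.cfg (sketch): cookie-insert stickiness across two web servers
listen forum 127.0.0.1:8080
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.11:80 cookie web1 check
    server web2 10.0.0.12:80 cookie web2 check
```

The idea being that HAProxy inserts a `SERVERID` cookie on the first response, so later requests from the same browser keep hitting the same backend.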
The site is a forum peaking at ~4Mbps of throughput, which I think is a lot of posts and reads! So I don't need a high-throughput device; I'm more concerned about concurrent users.
I have the following queries, though:
1. Where does the bulk of the workload sit on the load balancer: the CPU decoding the SSL traffic, or the RAM caching sessions for stickiness?
2. Following on from query (1), I have a spare server I would like to use, but how can I relate the server hardware specification I have to the web application performance I need?
I have a small 1U server (PowerEdge 1850) with 2x 76GB 10k Ultra 320 SCSI drives in RAID1, 2x 3GHz single-core Xeons (800MHz bus, 2MB L2 cache), and 6x 1GB sticks of PC2-3200 400MHz RAM. I would like to use this, but I have no experience with HAProxy or Pound, so I can't say whether it will be apt or not. Looking at hardware load-balancer specs, I would assume 6GB of RAM is way too much. What do others think about the CPU and HDDs?
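To help frame query (2), my plan for a rough sizing test on the 1850 is to benchmark raw RSA signing capacity with OpenSSL's built-in speed test, since each full SSL handshake costs the server roughly one private-key operation. Nothing here is specific to Pound or HAProxy; it just gives an upper bound on handshakes per second per core:

```shell
# Rough upper bound on SSL handshake capacity: RSA private-key signs/sec
# on one core (1024-bit keys were typical for this era of hardware).
openssl speed rsa1024

# Run across both cores to see the aggregate figure.
openssl speed -multi 2 rsa1024
```

If the signs/sec figure comfortably exceeds the expected rate of new SSL sessions, the CPUs should be fine; sustained bulk-cipher throughput at ~4Mbps is trivial by comparison.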
This isn't a shopping thread, so please don't post server models that would be suitable. Instead, if this box isn't up to the task, I would like to go back to query (1) so I can build something else that will be sufficient. I have lots of experience with server deployments, just not with these two packages.