
I want to achieve the best performance from my two servers (a SAN box and an ESX host). I have a good RAID setup (LSI controller, SAS disks) that benchmarks at 1 GB/s, so the bottleneck is now the network, since I have only two Intel NICs in each server. I also have four Broadcoms.

If I team 2 Intels with 2 Broadcoms in each server, will I get 4 Gbps of throughput?

I know that 10 Gb NICs would suit my needs better, but that is not an option right now.

Yes, this works fine in ESX/ESXi - we do it on all our servers.

Just make sure you realize the implications: you'll lose any feature that isn't supported by both NIC models (such as certain types of offloading).


Using bonding on Linux, this should work flawlessly. And since ESX is pretty much Linux-based, I expect it to work there as well. I'm not sure about Windows, though.
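For reference, here is a minimal sketch of what a Linux bond looks like with iproute2. The interface names (eth0, eth1, bond0) and the address are placeholders, and 802.3ad mode assumes the switch ports are configured for LACP:

```shell
# Sketch: create an LACP (802.3ad) bond from two NICs.
# Requires root and the bonding kernel module; names are examples.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Mixing Intel and Broadcom ports in one bond works at this level; the bond just enslaves whatever interfaces you give it, at the cost of the lowest common feature set.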


The problem with that is that teaming or bonding does not increase bandwidth between two stations.

Teaming can increase the overall bandwidth available to a host and - if LACP or the like is supported - on a switch as well, but the consumers of that bandwidth must be multiple stations/hosts/IPs.

The only way to increase bandwidth between host A and host B using multiple network connections is to... use multiple network connections.

You would have to assign each NIC on each end a separate (virtual) IP and route traffic appropriately.
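To illustrate that approach, here is a hypothetical addressing sketch: each NIC pair gets its own subnet, so traffic between host A and host B can be spread across both links explicitly. All names and addresses are made up for the example:

```shell
# On host A: one subnet per physical link.
# Host B would mirror this with 10.0.1.2 and 10.0.2.2.
ip addr add 10.0.1.1/24 dev eth0   # link 1 <-> B's eth0
ip addr add 10.0.2.1/24 dev eth1   # link 2 <-> B's eth1
```

A transfer then only uses both links if the application (or something like iSCSI multipathing) is told to open connections to both destination addresses.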

PS: VMware bonding primarily offers physical NIC failover, plus flexible outside connectivity for multiple virtual port groups.

Increasing point-to-point bandwidth is not what it does either.

EDIT, just in case this wasn't clear: no, connecting two systems with multiple NICs in each does NOT increase the bandwidth between them.

    • This isn't entirely correct. I use 4 NICs for iSCSI with round-robin MPIO to balance the load across them. You're right that a single data stream can't go faster than a single path, but remember that you usually run more than one VM on an ESX host. You should edit your answer to reflect this.
    • pauska: presumably the VMs do not access the storage directly. However, using iSCSI multipathing (on separate subnets) is a great suggestion.
    • Thanks. But the main question was whether the fact that the NICs are from different vendors affects total performance.
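For the round-robin MPIO setup mentioned above, a hedged sketch of how it is done on later ESXi versions (5.x+, via esxcli; the classic ESX of this thread's era used different tooling, and the device identifier here is a placeholder):

```shell
# Sketch: set an iSCSI device's path selection policy to round-robin,
# so I/O is spread across all active paths. Device ID is a placeholder.
esxcli storage nmp device set --device naa.600000000000000000000001 --psp VMW_PSP_RR

# Inspect the device's current policy and paths.
esxcli storage nmp device list --device naa.600000000000000000000001
```

This balances I/O per path rather than per bond member, which is why it scales across multiple NICs where simple teaming does not.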
