I would like to validate a certain thought with you guys. Assuming that:
- The device whose bandwidth I am measuring is an internal, Linux-based, router.
- If a NIC is configured to 100mbps, the network is at least 100mbps (that is, the NIC's speed setting is what limits the bandwidth, not the network, its latency, etc.)
- Elements such as CPU have no impact on bandwidth.
Would it be correct to:
- Run ifconfig every 10 seconds
- Compute the TX+RX bytes per second from the difference between consecutive ifconfig readings
- Multiply that byte count per second by 8 to get bits per second (let's say this is X)
- Get the NIC speed settings (10mbps, 100mbps, 1000mbps, this is Y)
- Calculate the bandwidth percent usage as: X / (Y*1024*1024)
Is this correct? How does duplex (half vs. full) impact this calculation?
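To make the question concrete, here is a minimal sketch of the steps above in Python. Instead of parsing ifconfig output it reads the kernel's per-interface byte counters and negotiated speed from sysfs (`/sys/class/net/<iface>/statistics/{rx,tx}_bytes` and `/sys/class/net/<iface>/speed`), which report the same numbers. The interface name `eth0` and the 10-second interval are placeholders; the divisor Y*1024*1024 is kept exactly as proposed, even though link speeds are normally quoted in decimal units (1 Mbit/s = 1,000,000 bit/s), since that discrepancy is part of what I'm asking about.

```python
import time

def read_counters(iface):
    """Return (rx_bytes, tx_bytes) from the kernel's interface counters."""
    base = f"/sys/class/net/{iface}/statistics"
    with open(f"{base}/rx_bytes") as f:
        rx = int(f.read())
    with open(f"{base}/tx_bytes") as f:
        tx = int(f.read())
    return rx, tx

def utilisation_pct(byte_delta, interval_s, speed_mbps):
    """Percent usage exactly as proposed: X / (Y*1024*1024) * 100.

    X = observed bits per second, Y = NIC speed setting in Mbit/s.
    Note: NIC speeds are decimal, so 1_000_000 may be the more
    correct divisor; 1024*1024 is kept to match the question.
    """
    bits_per_s = byte_delta * 8 / interval_s          # this is X
    return 100.0 * bits_per_s / (speed_mbps * 1024 * 1024)

def sample(iface="eth0", interval_s=10):
    """Take two counter snapshots interval_s apart and report usage."""
    rx1, tx1 = read_counters(iface)
    time.sleep(interval_s)
    rx2, tx2 = read_counters(iface)
    # /sys/class/net/<iface>/speed gives the negotiated speed in Mbit/s
    with open(f"/sys/class/net/{iface}/speed") as f:
        speed = int(f.read())
    return utilisation_pct((rx2 - rx1) + (tx2 - tx1), interval_s, speed)
```

The duplex question matters here because `utilisation_pct` lumps TX and RX into a single delta: on a full-duplex link each direction can carry Y on its own, so this sum could in principle read up to 200%.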