Author: Dave Norwood
I’ve been asked many times about iSCSI versus Fibre Channel for Storage Area Networks (SANs). While I believe iSCSI is a solid protocol appropriate for many applications, I’ve seen far more performance problems with iSCSI than with Fibre Channel. That is why I shy away from iSCSI, especially 1Gb iSCSI. I’ve been selling 4Gb Fibre Channel for years, and it isn’t merely four times faster than 1Gb iSCSI; it is more like five or six times as fast, IMHO. Why?
Two big reasons. First, the TCP/IP headers are much bigger than the Fibre Channel (FC) frame header, so moving the same amount of data means pushing more bits over iSCSI than over FC. Second, FC switches usually have lower latency (the time it takes a frame to move through the switch) than Ethernet switches. The latency penalty gets really bad with huge numbers of small reads and writes, and guess what: databases do lots of small reads and writes.
So 1Gb iSCSI has a quarter of the rated bandwidth of 4Gb FC, carries more protocol overhead, and suffers higher latency. 10Gb Ethernet helps, but today I mostly sell 8Gb Fibre Channel, and I truly believe 8Gb FC is still as fast as, if not faster than, 10Gb Ethernet; 16Gb FC is now shipping as well.
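To put rough numbers on the header-overhead point, here is a back-of-envelope sketch. It uses standard on-the-wire header sizes, ignores TCP options, and treats the 48-byte iSCSI PDU header as negligible (amortized over a large PDU) — a simplification, not a benchmark. Notice the framing gap alone is only a few percent; for small-block database I/O it is the per-frame latency, not framing efficiency, that does the real damage.

```python
# Back-of-envelope framing efficiency: iSCSI over 1 GbE vs Fibre Channel.
# Header sizes are the standard on-the-wire values; iSCSI PDU headers are
# ignored (amortized over a large PDU) as a simplifying assumption.

ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, MAC header, FCS, inter-frame gap
IP_HDR, TCP_HDR = 20, 20             # IPv4 and TCP headers, no options
MTU = 1500                           # standard (non-jumbo) Ethernet MTU

payload_per_frame = MTU - IP_HDR - TCP_HDR            # 1460 data bytes per frame
eth_efficiency = payload_per_frame / (MTU + ETH_OVERHEAD)

FC_PAYLOAD = 2112                                     # max FC frame payload
FC_OVERHEAD = 4 + 24 + 4 + 4                          # SOF, frame header, CRC, EOF
fc_efficiency = FC_PAYLOAD / (FC_PAYLOAD + FC_OVERHEAD)

print(f"iSCSI/Ethernet framing efficiency: {eth_efficiency:.1%}")  # ~94.9%
print(f"FC framing efficiency:             {fc_efficiency:.1%}")   # ~98.3%
```

Jumbo frames (9000-byte MTU) close most of the Ethernet framing gap, which is why they are commonly recommended on dedicated iSCSI networks — but they do nothing for per-frame switch latency.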
If you still want to use iSCSI, there are ways to minimize its deficiencies. Here are the things I would look at:
1) Use an enterprise-class switch with low latency and large buffers.
2) Make sure there is no other traffic on the iSCSI/Ethernet switch.
3) Enable flow control on that switch.
4) Enable TOE (TCP Offload Engine) on the servers’ Ethernet (iSCSI) ports, or get a true iSCSI HBA.
5) If you think the problem is bandwidth rather than latency, adding more ports to the server(s) and using link aggregation may help. You can also add/use more ports on the SAN and aggregate those. In a VMware environment, instead of (or in conjunction with) link aggregation, you can dedicate iSCSI ports to particular virtual machines to spread the load.
NOTE: Both the Ethernet/iSCSI NIC and the switch must support link aggregation and be configured for it. All enterprise-class switches and all “server class” NICs support link aggregation.
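On a Linux host, items 3–5 above can be sketched with `ethtool` and `iproute2`. This is illustrative only — the interface names (`eth1`, `eth2`, `bond0`) and settings are assumptions for a hypothetical dedicated iSCSI NIC, and the matching switch-side configuration still has to be done separately:

```shell
# 3) Check and enable Ethernet flow control (pause frames) on the NIC side;
#    the switch port must be configured to match.
ethtool -a eth1
ethtool -A eth1 rx on tx on

# 4) List the NIC's offload capabilities; full TOE needs vendor support, but
#    these enable the common segmentation/receive offloads where available.
ethtool -k eth1
ethtool -K eth1 tso on gro on

# 5) Link aggregation: bond two ports with LACP (802.3ad); the corresponding
#    switch ports must be configured as an LACP channel group.
ip link add bond0 type bond mode 802.3ad
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up
```

Keep in mind that 802.3ad hashes each flow to one physical link, so a single iSCSI session won’t exceed one port’s bandwidth; aggregation helps when many sessions or initiators share the links.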