I have started posting results for Open-MX over GigE on my Limulus cluster. I used NetPIPE (2.4) MPI/TCP for most of the tests. Because Open-MX originally required jumbo frames, I tested them first and noticed that using jumbo frames actually reduced throughput! I'm still in the process of collecting data, so I cannot draw any definite conclusions yet. I did notice that the latest version of Open-MX (0.9.1) can run over the standard frame size (1500 bytes). More results coming soon.
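Since jumbo frames turned out to matter so much, it is worth confirming what frame size each node is actually using before running the benchmarks. A quick read-only check (my own sketch, not part of the original test setup; no root needed):

```shell
# Print the current MTU of every network interface on this node.
# MTU 1500 = standard frames; anything larger (e.g. 9000) = jumbo frames.
for dev in /sys/class/net/*; do
  printf '%s mtu=%s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```

Run this on every node in the cluster; a mismatch between nodes (or between a node and the switch) is a common source of odd throughput numbers.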
Kernel: 2.6.23
CPU: Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz
Interconnect: Intel 82572EI Gigabit Ethernet PCIe 1X
Switches: SMC 8505T (5 port), SMC 8508T (8 port), SMC GS16 (16 port), and a cross-over cable (Note: I tested a 5-port 3Com, but it would only negotiate 100BT, so it went back. I also tested an 8-port ProCurve and it performed similarly to the SMC switches; more tests are needed.)
MPI: LAM, MPICH-MX (Open-MX 0.6)
Summary (8505T, 8508T, X-over):
Interesting Comparisons (8505T, 8508T, X-over)
SMC GS16 Switch, LAM, varying MTU (note: jumbo frames still reduce throughput, except at MTU 3000!)
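The MTU sweep behind these numbers can be scripted roughly as below. This is a sketch shown as a dry run (it prints each command instead of executing it, since changing the MTU needs root on every node); the interface name eth0, the hosts file, and the output file naming are my assumptions, not the actual test harness:

```shell
# Dry-run sketch of an MTU sweep: for each frame size, set the MTU on the
# NIC and re-run NetPIPE's MPI test (NPmpi) between two nodes.
for mtu in 1500 3000 4500 6000 9000; do
  # Would need to be run (with root) on every node, not just this one:
  echo "ip link set dev eth0 mtu $mtu"
  # Two-process NetPIPE run, output tagged with the MTU under test:
  echo "mpirun -np 2 -machinefile hosts NPmpi -o np.mtu$mtu"
done
```

Remember that the switch must also support the frame size being tested; an MTU the switch silently drops or fragments will look like a throughput collapse rather than an error.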