I have three servers, all running OEL 5.7 on bonded 10GbE interfaces. After setting up an NFS mount on them (the NFS server runs on server 1) with default settings, I was only getting about 25 MB/s from dd if=/dev/zero of=/mnt/nfs bs=1024k count=1000. I wondered why; even my old Solaris box is faster than that. So I asked a friend, and he pointed me to this guide: http://www.slashroot.in/how-do-linux-nfs-performance-tuning-and-optimization. After applying those tunings (except for the MTU part), I instantly got 60-70 MB/s.
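For reference, the write test can be wrapped like this. The dd invocation is the one from above; the TARGET variable and the test-file name are my additions so it runs against any directory:

```shell
#!/bin/sh
# Sequential-write throughput test, same shape as the dd run above.
# TARGET defaults to /tmp so this runs anywhere; point it at your NFS
# mount (e.g. TARGET=/mnt/nfs) to measure the remote write speed.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/nfs_dd_test" bs=1024k count=1000
rm -f "$TARGET/nfs_dd_test"
```

dd prints the elapsed time and throughput on its last line, which is where the 25 MB/s and 77 MB/s figures come from.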
1048576000 bytes (1.0 GB) copied, 13.6247 seconds, 77.0 MB/s
My curiosity kept going: I wondered what the maximum throughput I could get from NFS would be. So I created a tmpfs and exported it (dd against the tmpfs locally returns around 1.6 GB/s), and roughly 170 MB/s turned out to be the ceiling for my configuration. If anyone is interested in how much NFS can deliver, check out this 2008 post by Brendan Gregg: https://blogs.oracle.com/brendan/entry/up_to_2_gbytes_sec
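For anyone who wants to reproduce the RAM-backed export, here is a sketch of the server-side setup (run as root). The 2g size and the exports options are assumptions to make the sketch complete, not my exact values:

```shell
# On the NFS server: back the export with RAM so disks drop out of the test.
mount -t tmpfs -o size=2g tmpfs /mnt/tmpfs

# Add an export line (the client subnet here is a placeholder), then re-export:
#   /etc/exports:  /mnt/tmpfs  10.x.x.x/24(rw,async,no_root_squash)
exportfs -ra
```

With the backing store in RAM, any remaining bottleneck is the network stack and NFS itself rather than the disks.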
1048576000 bytes (1.0 GB) copied, 5.73885 seconds, 183 MB/s
The bottom line: these are the mount options that worked for me, as an /etc/fstab entry: 10.x.x.x:/mnt/tmpfs /mnt/tmpfs nfs bg,async,intr,noatime,rsize=32768,wsize=32768 0 0
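If you want to try the options before editing /etc/fstab, the equivalent one-off mount (same flags; 10.x.x.x is the placeholder server address from the fstab line) would look like this:

```shell
# Same options as the fstab entry, applied by hand (run as root).
# async + large rsize/wsize are what made the biggest difference for me;
# bg, intr, and noatime are quality-of-life options.
mount -t nfs -o bg,async,intr,noatime,rsize=32768,wsize=32768 \
      10.x.x.x:/mnt/tmpfs /mnt/tmpfs
```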
P.S. netperf between these servers returns 80-100 MB/s, which really makes me question whether I'm actually running on 10GbE. I will post an update on this issue later.
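A quick sanity check on those numbers (plain arithmetic; the netperf invocation in the comment is the standard TCP_STREAM form, not necessarily my exact command line):

```shell
#!/bin/sh
# netperf reports in 10^6 bits/sec by default, e.g.:
#   netperf -H 10.x.x.x -t TCP_STREAM
# 10GbE line rate in MB/s: 10 * 1000 Mbit/s divided by 8 bits per byte.
echo "10GbE line rate: $((10 * 1000 / 8)) MB/s"
# 100 MB/s back-converted to Mbit/s -- roughly a single 1GbE link,
# which is why the bond deserves a closer look.
echo "100 MB/s is $((100 * 8)) Mbit/s"
```

In other words, 80-100 MB/s is about what one 1GbE link delivers, an order of magnitude below the 1250 MB/s a 10GbE pipe should be capable of.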