NFS 4.2 multipath


 



Hi, I have an NFS 4.2 server with 6x 100 Gb/s ports.

The NFS server is connected to the ESXi host via two switches, with 3 NFS
server ports per switch.

Three VMs are running Oracle Linux 8.10 (kernel 4.18.0-553).

On each VM I mounted the NFS share 6 times, using 6 different server IP
addresses:

mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.90:/exports/lv_prd6 /deep-storage2
mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.91:/exports/lv_prd6 /deep-storage2
mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.92:/exports/lv_prd6 /deep-storage2
mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.93:/exports/lv_prd6 /deep-storage2
mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.94:/exports/lv_prd6 /deep-storage2
mount -o nconnect=16,max_connect=6,trunkdiscovery
10.44.41.95:/exports/lv_prd6 /deep-storage2
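To see what those mounts actually produced, the per-mount statistics in /proc/self/mountstats list one "xprt:" line per RPC transport the client opened. The helper below (my own diagnostic sketch, not from the original setup) counts the xprt lines in the device section for a given mount point; with nconnect=16 and max_connect=6 one would expect transports toward all six addresses, not just the first.

```shell
# Sketch: count RPC transports ("xprt:" lines) the NFS client created
# for one mount point, from /proc/self/mountstats.
count_xprts() {
    # $1 = mountstats file, $2 = mount point to inspect
    awk -v mnt="$2" '
        # a section starts at: device <server:/export> mounted on <mnt> with ...
        index($0, "mounted on " mnt " ") { insec = 1; next }
        insec && /^device /              { insec = 0 }   # next section begins
        insec && /xprt:/                 { n++ }         # one line per transport
        END { print n + 0 }
    ' "$1"
}

# Typical use on a live client:
#   count_xprts /proc/self/mountstats /deep-storage2
```

If this prints far fewer transports than expected, the extra mounts were merged into an existing superblock without adding connections.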

During a test I/O workload (8 fio jobs per VM, each writing different
files), the counters on the two switches show that a single port carries
much higher traffic -

2 250 000 TX packets per second

while the rest of the traffic spreads evenly across the other 5 ports -

900 000 TX packets per second

Can you help me determine why the traffic doesn't spread evenly across all
6 ports, but instead splits roughly 5+1?

It seems as if nconnect=16 takes effect only for the first of the 6 IP
addresses.
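One way to test that suspicion from the client side is to count the established TCP connections to port 2049 per server address. The helper below is a sketch of mine (not from the original report): it parses `ss -tn` output, so an uneven per-address count here would match the 5+1 packet-rate pattern seen on the switches.

```shell
# Sketch: summarise established NFS (port 2049) TCP connections per
# server address, from `ss -tn` output read on stdin.
conns_per_server() {
    awk 'NR > 1 {                       # skip the ss header line
             peer = $4                  # Peer Address:Port column
             sub(/:[0-9]+$/, "", peer)  # drop the :2049 port suffix
             c[peer]++
         }
         END { for (a in c) print a, c[a] }'
}

# Typical use on a live client:
#   ss -tn state established '( dport = :2049 )' | conns_per_server
```

With nconnect=16 honored on every address, each of the six server IPs should show a similar connection count; 16 connections to one address and 1 to each of the others would confirm the imbalance.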




