Ok. Thank you. Are there any plans to add nconnect for all available IPs?

On Tue, Jun 18, 2024 at 19:43, Olga Kornievskaia <aglo@xxxxxxxxx>:
>
> On Tue, Jun 18, 2024 at 6:56 AM Anton Gavriliuk <antosha20xx@xxxxxxxxx> wrote:
> >
> > Hi, I have an NFS 4.2 server with 6x100 Gb/s ports.
> >
> > The NFS server is connected via two switches to the ESXi host, 3 NFS
> > server ports per switch.
> >
> > Three VMs are running Oracle Linux 8.10 (kernel 4.18.0-553).
> >
> > On the VMs I mounted the NFS share 6 times with 6 different IP addresses:
> >
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.90:/exports/lv_prd6 /deep-storage2
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.91:/exports/lv_prd6 /deep-storage2
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.92:/exports/lv_prd6 /deep-storage2
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.93:/exports/lv_prd6 /deep-storage2
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.94:/exports/lv_prd6 /deep-storage2
> > mount -o nconnect=16,max_connect=6,trunkdiscovery 10.44.41.95:/exports/lv_prd6 /deep-storage2
> >
> > During a test I/O workload (8 fio jobs with different files per VM),
> > looking at the two switches I see that a single port shows much higher
> > bandwidth -
> >
> > 2 250 000 TX packets per second
> >
> > while the rest of the traffic spreads evenly across the other 5 ports -
> >
> > 900 000 TX packets per second
> >
> > Please help determine why the traffic doesn't spread evenly across all 6
> > ports, but instead more like 5+1?
> >
> > It seems like nconnect=16 works only for the 1st of the 6 IP addresses.
>
> Nconnect only applies to the 1st (main) connection. In your example
> there should be 16 connections established to 10.44.41.90, and then 5
> other trunked connections are added to the group of the 16
> nconnect connections. RPCs are round-robined over the
> 16+5 connections. 10.44.41.90 sees more traffic than the rest of the
> IPs because it has more connections.
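The 16+5 behavior described above can be illustrated with a small toy model. This is not the kernel's actual sunrpc code, just a sketch under the stated assumptions: nconnect=16 creates 16 transports to the mount address, max_connect admits one extra trunked transport per additional server IP, and RPCs are dealt out round-robin across the whole list.

```python
from itertools import cycle
from collections import Counter

# Toy transport list (an assumption for illustration, not kernel code):
# 16 nconnect transports to the primary mount IP, plus one trunked
# transport for each of the 5 additional server IPs.
transports = ["10.44.41.90"] * 16 + [
    "10.44.41.91", "10.44.41.92", "10.44.41.93",
    "10.44.41.94", "10.44.41.95",
]

def distribute(n_rpcs):
    """Round-robin n_rpcs RPCs over the transport list; count per IP."""
    counts = Counter()
    for _, ip in zip(range(n_rpcs), cycle(transports)):
        counts[ip] += 1
    return counts

counts = distribute(21_000)
# The primary IP carries 16 of every 21 RPCs; each of the other
# five IPs carries only 1 of every 21, matching the skew seen on
# the switch ports.
print(counts)
```

With 21 transports total, 10.44.41.90 ends up with 16/21 of the RPC traffic, which is why its switch port shows far more TX packets than the others.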