Jonathan Derrick <jonathan.derrick@xxxxxxxxx> wrote on Thu, Dec 22, 2022 at 17:15:
>
> On 12/22/22 12:26 AM, korantwork@xxxxxxxxx wrote:
> > From: Xinghui Li <korantli@xxxxxxxxxxx>
> >
> > Commit ee81ee84f873 ("PCI: vmd: Disable MSI-X remapping when possible")
> > disabled VMD MSI-X remapping to optimize PCI performance. However,
> > this feature severely degrades performance in multi-disk situations.
> >
> > In the FIO 4K random test, we tested 1 disk on 1 CPU.
> >
> > With MSI-X remapping disabled:
> > read: IOPS=1183k, BW=4622MiB/s (4847MB/s)(1354GiB/300001msec)
> > READ: bw=4622MiB/s (4847MB/s), 4622MiB/s-4622MiB/s (4847MB/s-4847MB/s),
> > io=1354GiB (1454GB), run=300001-300001msec
> >
> > With MSI-X remapping enabled:
> > read: IOPS=1171k, BW=4572MiB/s (4795MB/s)(1340GiB/300001msec)
> > READ: bw=4572MiB/s (4795MB/s), 4572MiB/s-4572MiB/s (4795MB/s-4795MB/s),
> > io=1340GiB (1438GB), run=300001-300001msec
> >
> > However, bypass mode can increase the interrupt cost on the CPU.
> > We tested 12 disks on 6 CPUs,
>
> Well, the bypass mode was made to improve performance where you have >4
> drives, so this is pretty surprising. With bypass mode disabled, VMD will
> intercept and forward interrupts, increasing costs.

We also found that the more drives we tested, the more severe the
performance degradation became. When we tested 8 drives on 6 CPUs,
there was about a 30% drop.

> I think Nirmal would want to understand if there's some other factor
> going on here.

I agree with this as well. The tested server uses the "none" I/O
scheduler, all tests ran on the same server, and the tested drives are
Samsung Gen-4 NVMe. Is there anything else you are worried might affect
the test results?
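For reference, the 1-disk/1-CPU case was driven by an fio job roughly
like the one below. The device path, ioengine, and iodepth shown here
are illustrative assumptions, not necessarily our exact settings:

; 4K random-read job approximating the numbers quoted above
; (filename, ioengine, and iodepth are example values)
[randread-4k]
filename=/dev/nvme0n1
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=128
numjobs=1
; pin the single job to one CPU
cpus_allowed=0
; time-based 300s run, matching run=300001-300001msec above
runtime=300
time_based
group_reporting

Scaling this to the multi-disk cases is a matter of adjusting filename,
numjobs, and cpus_allowed.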
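For anyone reading along without the code handy, the switch being
discussed is a single bit in VMD's VMCONFIG register; below is a minimal
sketch of the toggle added by ee81ee84f873 in
drivers/pci/controller/vmd.c. Treat the register offset and bit value as
my reading of the code rather than authoritative:

#include <linux/pci.h>

/*
 * Sketch of the remapping toggle; PCI_REG_VMCONFIG and
 * VMCONFIG_MSI_REMAP values should be double-checked against the tree,
 * and struct vmd_dev is the driver-private device from vmd.c.
 */
#define PCI_REG_VMCONFIG	0x44
#define VMCONFIG_MSI_REMAP	0x2

static void vmd_set_msi_remapping(struct vmd_dev *vmd, bool enable)
{
	u16 reg;

	pci_read_config_word(vmd->dev, PCI_REG_VMCONFIG, &reg);
	/*
	 * Setting the bit disables remapping (bypass mode); clearing it
	 * makes VMD intercept and forward child MSI-X interrupts.
	 */
	reg = enable ? (reg & ~VMCONFIG_MSI_REMAP) :
		       (reg | VMCONFIG_MSI_REMAP);
	pci_write_config_word(vmd->dev, PCI_REG_VMCONFIG, reg);
}

With the bit set (bypass), child-device interrupts go straight through
to the root complex; with it clear, they land on VMD's own MSI-X vectors
and the driver demultiplexes them, which is the extra per-interrupt cost
Jonathan mentions above.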