On 4/30/13 9:38 AM, Yan Burman wrote:
-----Original Message-----
From: Tom Talpey [mailto:tom@xxxxxxxxxx]
Sent: Tuesday, April 30, 2013 17:20
To: Yan Burman
Cc: J. Bruce Fields; Wendy Cheng; Atchley, Scott; Tom Tucker;
linux-rdma@xxxxxxxxxxxxxxx; linux-nfs@xxxxxxxxxxxxxxx; Or Gerlitz
Subject: Re: NFS over RDMA benchmark
On 4/30/2013 1:09 AM, Yan Burman wrote:
I now get up to ~95K IOPS and 4.1GB/sec bandwidth.
...
ib_send_bw with the Intel IOMMU enabled did get up to 4.5GB/sec
BTW, you may want to verify that these are the same GB. Many benchmarks
say KB/MB/GB when they really mean KiB/MiB/GiB.
The difference between GB and GiB is about 7.4%, very close to the gap
between 4.1 and 4.5.
Just a thought.
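
To make that concrete, a quick sanity check in plain C (the only
assumption is that the NFS number might be binary-prefixed):

#include <stdio.h>

int main(void)
{
	double gib = 1073741824.0;	/* 2^30 bytes */
	double gb  = 1000000000.0;	/* 10^9 bytes */

	printf("GiB/GB ratio: %.4f (~%.1f%% difference)\n",
	       gib / gb, (gib / gb - 1.0) * 100.0);
	printf("4.1 GiB/s = %.2f GB/s\n", 4.1 * gib / gb);
	return 0;
}

If the NFS figure is really 4.1 GiB/s, that is about 4.40 GB/s, which
nearly closes the gap to ib_send_bw's 4.5 GB/s.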
The question is not why there is a ~400MB/s difference between ib_send_bw and NFSoRDMA.
The question is why, with the IOMMU enabled, ib_send_bw reached the same bandwidth as without it, while NFSoRDMA dropped to half.
NFSRDMA is constantly registering and unregistering memory when you use
FRMR mode. By contrast, IPoIB has a descriptor ring that is set up once
and re-used. I suspect this is the difference-maker. Have you tried
running the server in ALL_PHYSICAL mode, i.e. where it uses a DMA_MR for
all of memory?
Tom
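
To make the contrast concrete, here is a minimal userspace-verbs sketch
of the two modes. The in-kernel server uses the ib_ API and FRMR work
requests rather than libibverbs, and the helper names below are
invented for illustration; error handling is trimmed:

#include <stddef.h>
#include <infiniband/verbs.h>

/* FRMR-like mode: register and deregister around every single I/O. */
static void per_io_transfer(struct ibv_pd *pd, void *buf, size_t len)
{
	struct ibv_mr *mr;

	mr = ibv_reg_mr(pd, buf, len,
			IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
	if (!mr)
		return;
	/* ... post work requests using mr->lkey / mr->rkey ... */
	ibv_dereg_mr(mr);	/* paid again on the next request */
}

/* ALL_PHYSICAL-like mode: one long-lived registration, reused forever. */
static struct ibv_mr *register_once(struct ibv_pd *pd, void *buf, size_t len)
{
	return ibv_reg_mr(pd, buf, len,
			  IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
}

Every pass through per_io_transfer() pins, maps, and unmaps the buffer;
register_once() pays that cost exactly once at setup.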
From some googling, it seems that when the IOMMU is enabled, DMA mapping functions get a lot more expensive.
Perhaps that is the reason for the performance drop.
Yan
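
As a rough sketch of where that cost lives (map_one_buffer() is an
invented name, but dma_map_single()/dma_unmap_single() are the real
kernel DMA API entry points): with the IOMMU on, the map must allocate
an IOVA and program an IOMMU page-table entry, and the unmap must tear
it down, often with an IOTLB flush; with the IOMMU off, both reduce to
little more than address arithmetic.

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int map_one_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr;

	/* IOMMU on: allocate an IOVA, write an IOMMU page-table entry. */
	addr = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, addr))
		return -EIO;

	/* ... hand 'addr' to the HCA and wait for completion ... */

	/* IOMMU on: invalidate the mapping, flush the IOTLB. */
	dma_unmap_single(dev, addr, len, DMA_BIDIRECTIONAL);
	return 0;
}

NFSoRDMA in FRMR mode takes that round trip per request, while
ib_send_bw maps its buffers once up front, which would explain why only
the NFS side sees the IOMMU penalty.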