Re: NFS over RDMA issues on Linux 5.4

On Tue, Aug 04, 2020 at 12:52:27PM +0200, Timo Rothenpieler wrote:
> On 04.08.2020 11:36, Leon Romanovsky wrote:
> > On Mon, Aug 03, 2020 at 12:24:21PM -0400, Chuck Lever wrote:
> > > Hi Timo-
> > >
> > > > On Aug 3, 2020, at 11:05 AM, Timo Rothenpieler <timo@xxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > Hello,
> > > >
> > > > I have just deployed a new system with Mellanox ConnectX-4 VPI EDR IB cards and wanted to set up NFS over RDMA on it.
> > > >
> > > > However, while mounting the FS over RDMA works fine, actually using it results in the following messages absolutely hammering dmesg on both client and server:
> > > >
> > > > > https://gist.github.com/BtbN/9582e597b6581f552fa15982b0285b80#file-server-log
> > > >
> > > > The spam only stops once I forcibly reboot the client. The filesystem gets nowhere during all this; the retrans counter in nfsstat just keeps going up, and nothing actually gets done.
> > > >
> > > > This is on Linux 5.4.54, using nfs-utils 2.4.3.
> > > > The mlx5 driver had enhanced-mode disabled in order to enable IPoIB connected mode with an MTU of 65520.
> > > >
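> > > > For reference, connected mode was enabled roughly like this (a sketch; the actual interface name may differ from ib0):
> > > >
> > > >     # switch the IPoIB interface to connected mode, then raise the MTU
> > > >     echo connected > /sys/class/net/ib0/mode
> > > >     ip link set dev ib0 mtu 65520
> > > >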
> > > > Normal NFS 4.2 over TCP works perfectly fine on this setup; it's only when I mount via RDMA that things go wrong.
> > > >
> > > > Is this an issue on my end, or did I run into a bug somewhere here?
> > > > Any pointers, patches and solutions to test are welcome.
> > >
> > > I haven't seen that failure mode here, so the best I can recommend
> > > is to keep investigating. I've copied linux-rdma in case they have
> > > any advice.
> >
> > The mention of IPoIB is slightly confusing in the context of NFS-over-RDMA.
> > Are you running NFS over IPoIB?
>
> As far as I'm aware, NFS over RDMA still needs an IP address and port to
> target, so IPoIB is mandatory?
> At least the admin guide in the kernel says so.
>
> Right now I am actually running NFS over IPoIB (without RDMA) because of
> the issue at hand, and would like to turn on RDMA for better performance.
>
> > From a brief look at the CQE error syndrome (local length error), the client is sending a malformed WQE.
>
> Does that point to an issue in the kernel code, or to something I did wrong?
>
> The fstab entries for these mounts look like this:
>
> 10.110.10.200:/home /home nfs4 rw,rdma,port=20049,noatime,async,vers=4.2,_netdev 0 0
>
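> Which should correspond to mounting by hand with something like this
> (same options, sketched out):
>
>     mount -t nfs4 -o rw,rdma,port=20049,noatime,async,vers=4.2 \
>         10.110.10.200:/home /home
>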
> Is there anything more I can investigate? I tried turning connected mode
> off and lowering the MTU in turn, but that did not have any effect.
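
Timo, as a quick sanity check that the mount is really using the RDMA
transport: once mounted, /proc/mounts should list proto=rdma among the
options (a sketch, using the paths from your fstab entry):

    grep ' /home ' /proc/mounts
    # an RDMA-backed mount should show proto=rdma,port=20049 in its options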

Chuck,

You probably know which traces Timo should enable on the client.
The fact that NFS works over (non-enhanced) IPoIB makes a driver/FW
issue much less likely.
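
Something along the lines of the rpcrdma and sunrpc tracepoints, e.g.
(a sketch, assuming tracefs is mounted at /sys/kernel/debug/tracing):

    echo 1 > /sys/kernel/debug/tracing/events/rpcrdma/enable
    echo 1 > /sys/kernel/debug/tracing/events/sunrpc/enable
    # reproduce the failure, then collect the log
    cat /sys/kernel/debug/tracing/trace > nfs-rdma-trace.txt

or, equivalently, with trace-cmd:

    trace-cmd record -e rpcrdma -e sunrpc
    trace-cmd report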

Thanks


