----- Original Message -----
> From: "Bart Van Assche" <bart.vanassche@xxxxxxxxxxx>
> To: "Laurence Oberman" <loberman@xxxxxxxxxx>
> Cc: leon@xxxxxxxxxx, "Yishai Hadas" <yishaih@xxxxxxxxxxxx>, linux-rdma@xxxxxxxxxxxxxxx
> Sent: Wednesday, June 15, 2016 8:51:18 AM
> Subject: Re: multipath IB/srp fail-over testing lands up in dump stack in swiotlb_alloc_coherent()
>
> On 06/15/2016 02:02 PM, Laurence Oberman wrote:
> > We are missing something here
>
> The source code excerpts in my previous e-mail came from the latest
> Linux kernel (v4.7-rc3). Maybe older kernels behave in a different way.
>
> BTW, did you run into the "swiotlb buffer is full" error messages while
> testing 4MB I/O? Have you already considered reducing the memory that
> is needed for RDMA queues by reducing the queue depth? I ran my SRP
> tests with the default swiotlb buffer size and with the following in
> srp_daemon.conf:
>
> a queue_size=32,max_cmd_per_lun=32,max_sect=8192
>
> Bart.

Hi Bart,

All my testing here has been 4MB I/O while restarting controllers. This is a
customer requirement: large sequential 4MB I/O, both buffered and O_DIRECT.

I currently have the queue depth at 128, but will reduce it to 32 and test.
My config is as follows, per customer requirements:

[root@jumptest1 ~]# cat /etc/ddn/srp_daemon.conf
a queue_size=128,max_cmd_per_lun=128,max_sect=8192

Interestingly, I have absolutely no issue with ib_srp when testing all types
of I/O on this very large array. It's been rock solid upstream since all the
fixes we now have in ib_srp. The swiotlb issue, as already mentioned, shows
up only during reconnects and does NOT affect regular I/O.

I will note this observation in the patch I will be sending for the ib_srp*
documentation.

Thanks!!
Laurence
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
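[Editor's note: the mitigation discussed above boils down to lowering the SRP queue depth in srp_daemon.conf, which shrinks the RDMA queue memory that swiotlb has to back. A minimal sketch of staging that change is below; it writes to a temporary file purely for illustration, since the real path (/etc/ddn/srp_daemon.conf in this thread) is site-specific, and srp_daemon would still need a restart/reconnect for the new settings to take effect.]

```shell
# Sketch: stage the reduced-queue-depth SRP config from this thread.
# Writing to a temp file for illustration; on a real host this line
# would go into srp_daemon.conf (here: /etc/ddn/srp_daemon.conf).
conf=$(mktemp)
cat > "$conf" <<'EOF'
a queue_size=32,max_cmd_per_lun=32,max_sect=8192
EOF

# Verify the settings before restarting srp_daemon / reconnecting.
grep queue_size "$conf"
```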