Re: [PATCH v2 00/18] NFS/RDMA client patches for v4.7


 



On 4/26/2016 9:57 AM, Chuck Lever wrote:

On Apr 26, 2016, at 10:13 AM, Steve Wise <swise@xxxxxxxxxxxxxxxxxxxxx> wrote:

Hey Chuck, I'm testing this series on cxgb4. I'm running 'iozone -a -+d -I' on a share and watching the server stats. Are the starve numbers expected?

Yes, unless you're seeing much higher numbers than
you used to.


Every 5.0s: for s in  /proc/sys/sunrpc/svc_rdma/rdma_* ; do echo -n "$(basename $s): "; cat $s; done                              Tue Apr 26 07:10:17 2016

rdma_stat_read: 379872
rdma_stat_recv: 498144
rdma_stat_rq_poll: 0
rdma_stat_rq_prod: 0
rdma_stat_rq_starve: 675564

This means work was enqueued on the svc_xprt, but by the
time the upper layer invoked svc_rdma_recvfrom, the work
was already handled by an earlier wake-up.

I'm not exactly sure why this happens, but it seems to be
normal (if suboptimal).
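For illustration only, here is a minimal user-space C sketch (not the svcrdma code itself) of how a count like rdma_stat_rq_starve can accumulate: two workers are woken for one enqueued item, and the slower one finds the queue already drained by the earlier wake-up. The queue, worker threads, and counters here are all hypothetical stand-ins for the svc_xprt machinery Chuck describes.

/* Hypothetical illustration of the "rq_starve" pattern: two consumers
 * are woken for a single enqueued item; the slower one finds the queue
 * already drained and bumps a starve counter.
 * Build: cc -pthread starve.c -o starve
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int queued;            /* items waiting to be processed        */
static int done;              /* producer-finished flag               */
static long handled, starved; /* per-run stats, akin to rdma_stat_*   */

static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!done || queued) {
		if (!queued) {
			/* Woken (or entered) with nothing to do: an
			 * earlier wake-up already consumed the work. */
			starved++;
			pthread_cond_wait(&cond, &lock);
			continue;
		}
		queued--;
		handled++;
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t[2];
	for (int i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, worker, NULL);

	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&lock);
		queued++;
		/* Broadcast wakes both workers for one item; only one
		 * finds the work, the other records a "starve". */
		pthread_cond_broadcast(&cond);
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);

	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	printf("handled=%ld starved=%ld\n", handled, starved);
	return 0;
}

Running it shows handled equal to the number of items enqueued, while starved varies from run to run: harmless, just wasted wake-ups, which matches the "normal (if suboptimal)" description above.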


rdma_stat_sq_poll: 0
rdma_stat_sq_prod: 0
rdma_stat_sq_starve: 1748000

No SQ space to post a Send, so the caller is put to sleep.

The server chronically underestimates the SQ depth, especially
for FRWR. I haven't figured out a better way to estimate it.

But it's generally harmless, as there is a mechanism to put
callers to sleep until there is space on the SQ.
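Again purely illustrative, not the svcrdma implementation: a small C sketch of the kind of mechanism described here, where a sender that finds no free SQ slots bumps a starve counter and sleeps until a completion returns a credit. The SQ depth, function names, and counter are assumptions chosen for the example.

/* Hypothetical SQ-space accounting: a fixed number of send-queue
 * slots, callers sleep when none are free, and a counter (like
 * rdma_stat_sq_starve) records how often that happens.
 * Build: cc -pthread sq_credit.c -o sq_credit
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define SQ_DEPTH 2            /* deliberately small so senders collide */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  space = PTHREAD_COND_INITIALIZER;
static int sq_free = SQ_DEPTH;
static long sq_starve;        /* times a caller found the SQ full */

/* Reserve one SQ slot before posting a Send; sleep if the SQ is full. */
static void sq_slot_get(void)
{
	pthread_mutex_lock(&lock);
	if (!sq_free)
		sq_starve++;           /* no room: count it, then sleep */
	while (!sq_free)
		pthread_cond_wait(&space, &lock);
	sq_free--;
	pthread_mutex_unlock(&lock);
}

/* A Send "completion" frees the slot and wakes one sleeping sender. */
static void sq_slot_put(void)
{
	pthread_mutex_lock(&lock);
	sq_free++;
	pthread_cond_signal(&space);
	pthread_mutex_unlock(&lock);
}

static void *sender(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++) {
		sq_slot_get();       /* block until there is SQ space */
		usleep(100);         /* pretend the Send is in flight */
		sq_slot_put();       /* completion returns the credit */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, sender, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	printf("sq_starve=%ld\n", sq_starve);
	return 0;
}

With more senders than slots, sq_starve climbs quickly, but every Send still goes out; the sleep/wake path just adds latency, which is why an underestimated SQ depth is described as generally harmless.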



Thanks.

With this iw_cxgb4 drain fix applied:

[PATCH 3/3] iw_cxgb4: handle draining an idle qp

http://www.spinics.net/lists/linux-rdma/msg34927.html

The series tests good over cxgb4.

Tested-by: Steve Wise <swise@xxxxxxxxxxxxxxxxxxxxx>



