On 5/20/2022 2:12 PM, David Howells wrote:
> Tom Talpey <tom@xxxxxxxxxx> wrote:
>
>> SoftROCE is a bit of a hot mess in upstream right now. It's
>> getting a lot of attention, but it's still pretty shaky.
>> If you're testing, I'd STRONGLY recommend SoftiWARP.
>
> I'm having problems getting that working. I'm setting the client up with:
>
>     rdma link add siw0 type siw netdev enp6s0
>     mount //192.168.6.1/scratch /xfstest.scratch -o rdma,user=shares,pass=...
>
> and then see:
>
>     CIFS: Attempting to mount \\192.168.6.1\scratch
>     CIFS: VFS: _smbd_get_connection:1513 warning: device max_send_sge = 6 too small
>     CIFS: VFS: _smbd_get_connection:1516 Queue Pair creation may fail
>     CIFS: VFS: _smbd_get_connection:1519 warning: device max_recv_sge = 6 too small
>     CIFS: VFS: _smbd_get_connection:1522 Queue Pair creation may fail
>     CIFS: VFS: _smbd_get_connection:1559 rdma_create_qp failed -22
>     CIFS: VFS: _smbd_get_connection:1513 warning: device max_send_sge = 6 too small
>     CIFS: VFS: _smbd_get_connection:1516 Queue Pair creation may fail
>     CIFS: VFS: _smbd_get_connection:1519 warning: device max_recv_sge = 6 too small
>     CIFS: VFS: _smbd_get_connection:1522 Queue Pair creation may fail
>     CIFS: VFS: _smbd_get_connection:1559 rdma_create_qp failed -22
>     CIFS: VFS: cifs_mount failed w/return code = -2
>
> in dmesg.
>
> Problem is, I don't know what to do about it :-/
It looks like the client hardcodes 16 SGEs, with no option to configure
a smaller value or to reduce its request. That's bad, because providers
all have their own limits - and SIW_MAX_SGE is 6. I thought I'd seen
this working (metze?), but either the code changed or someone built a
custom version.
Namjae/Long, have you used siw successfully? Why does the code require
16 SGEs regardless of other size limits? Normally, if the lower layer
supports fewer, the upper layer simply reduces its operation sizes.
Tom.