Re: [PATCH RFC] svcrdma: Ignore source port when computing DRC hash

On 6/5/2019 1:25 PM, Chuck Lever wrote:
Hi Tom-

On Jun 5, 2019, at 12:43 PM, Tom Talpey <tom@xxxxxxxxxx> wrote:

On 6/5/2019 8:15 AM, Chuck Lever wrote:
The DRC is not working at all after an RPC/RDMA transport reconnect.
The problem is that the new connection uses a different source port,
which defeats DRC hash.

An NFS/RDMA client's source port is meaningless for RDMA transports.
The transport layer typically sets the source port value on the
connection to a random ephemeral port. The server already ignores it
for the "secure port" check. See commit 16e4d93f6de7 ("NFSD: Ignore
client's source port on RDMA transports").

Where does the entropy come from, then, for the server to not
match other requests from other mount points on this same client?

The first ~200 bytes of each RPC Call message.

[ Note that this has some fun ramifications for calls with small
RPC headers that use Read chunks. ]
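
[ For the curious, a rough userspace sketch of that idea: checksum the
first couple hundred bytes of the Call message and fold that into the
DRC key. The constant and function names below are illustrative, not
the server's actual code, which uses its own CRC helpers: ]

/*
 * Illustrative DRC entropy calculation: a CRC over the head of the
 * RPC Call message. Uses zlib's crc32() here; DRC_CSUM_LEN is a
 * made-up constant standing in for the server's fixed prefix length.
 */
#include <stddef.h>
#include <stdint.h>
#include <zlib.h>

#define DRC_CSUM_LEN 200

static uint32_t drc_checksum(const unsigned char *call, size_t len)
{
	size_t n = len < DRC_CSUM_LEN ? len : DRC_CSUM_LEN;

	return (uint32_t)crc32(0L, call, (uInt)n);
}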

Ok, good to know. I forgot that the Linux server implemented this.
I have some concerns about it, honestly, and it's important to remember
that it's not the same on all servers. But for the problem you're
fixing, it's ok I guess, and certainly better than today. Still, the
errors are going to be completely silent, and can lead to data being
corrupted. Well, welcome to the world of NFSv3.

Any time an XID happens to match on a second mount, it will trigger
incorrect server processing, won't it?

Not a risk for clients that use only a single transport per
client-server pair.

I just want to interject here that this is completely irrelevant.
The server can't know what these clients are doing, or expecting.
One case that might work is not any kind of evidence, and is not
a workaround.

And since RDMA is capable of
such high IOPS, the likelihood seems rather high.

Only when the server's durable storage is slow enough to cause
some RPC requests to have extremely high latency.

And, most clients use an atomic counter for their XIDs, so they
are also likely to wrap that counter over some long-pending RPC
request.
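
[ A sketch of that XID scheme, with back-of-envelope math: a 32-bit
counter at, say, one million IOPS wraps in 2^32 / 1e6 ~= 4300 seconds,
about 72 minutes, so a request pending longer than that can be lapped
by its own XID. Names below are illustrative: ]

/*
 * Illustrative client-side XID generator: seed once, then bump an
 * atomic counter for every Call. Once the counter wraps, it reuses
 * the XID of any still-pending request.
 */
#include <stdatomic.h>
#include <stdint.h>

static atomic_uint_fast32_t xid_counter;

void xid_init(uint32_t seed)
{
	atomic_init(&xid_counter, seed);	/* e.g. from a random source */
}

uint32_t xid_next(void)
{
	return (uint32_t)atomic_fetch_add(&xid_counter, 1);
}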

The only real answer here is NFSv4 sessions.


Missing the cache
might actually be safer than hitting, in this case.

Remember that _any_ retransmit on RPC/RDMA, and that includes NFSv3,
requires a fresh connection to reset credit accounting due to the
lost half of the RPC Call/Reply pair.

I can very quickly reproduce bad (non-deterministic) behavior
by running a software build on an NFSv3 on RDMA mount point
with disconnect injection. If the DRC issue is addressed, the
software build runs to completion.

Ok, good. But I have a better test.

In the Connectathon suite, there's a "Special" test called "nfsidem".
I wrote this test in, like, 1989 so I remember it :-)

This test performs all the non-idempotent NFSv3 operations in a loop,
and each loop element depends on the previous one, so if there's
any failure, the test immediately bombs.

Nobody seems to understand it; usually when it gets run, people run
it without injecting errors, and it "passes", so they decide
everything is ok.

So my suggestion is to run your flakeway packet-drop harness while
running nfsidem in a huge loop (nfsidem 10000). The test is slow,
owing to the expensive operations it performs, so you'll need to
run it for a long time.

You'll almost definitely get a failure or two, since the NFSv3
protocol is flawed by design. But you can compare the behaviors,
and even compute a likelihood. I'd love to see some actual numbers.

IMO we can't leave things the way they are.

Agreed!

Tom.


I'm not sure why I never noticed this before.

Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
---
  net/sunrpc/xprtrdma/svc_rdma_transport.c |    7 ++++++-
  1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 027a3b0..1b3700b 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -211,9 +211,14 @@ static void handle_connect_req(struct rdma_cm_id *new_cma_id,
  	/* Save client advertised inbound read limit for use later in accept. */
  	newxprt->sc_ord = param->initiator_depth;

-	/* Set the local and remote addresses in the transport */
  	sa = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.dst_addr;
  	svc_xprt_set_remote(&newxprt->sc_xprt, sa, svc_addr_len(sa));
+	/* The remote port is arbitrary and not under the control of the
+	 * ULP. Set it to a fixed value so that the DRC continues to work
+	 * after a reconnect.
+	 */
+	rpc_set_port((struct sockaddr *)&newxprt->sc_xprt.xpt_remote, 0);
+
  	sa = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.src_addr;
  	svc_xprt_set_local(&newxprt->sc_xprt, sa, svc_addr_len(sa));
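
[ For reference, a userspace sketch of what rpc_set_port() amounts to
for IPv4 and IPv6 sockaddrs (the real helper lives in
include/linux/sunrpc/addr.h). With the port forced to a constant, the
remote address the DRC keys on no longer changes across reconnects: ]

/*
 * Illustrative stand-in for rpc_set_port(): overwrite the port field
 * of an AF_INET or AF_INET6 sockaddr in place.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void set_port(struct sockaddr *sap, unsigned short port)
{
	switch (sap->sa_family) {
	case AF_INET:
		((struct sockaddr_in *)sap)->sin_port = htons(port);
		break;
	case AF_INET6:
		((struct sockaddr_in6 *)sap)->sin6_port = htons(port);
		break;
	}
}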





--
Chuck Lever







