Re: [pnfs] [PATCH 2/2] nfsd41: add RPC header size to fore channel negotiation

On Mon, Sep 21, 2009 at 08:29:30AM -0400, William A. (Andy) Adamson wrote:
> On Fri, Sep 18, 2009 at 5:12 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> > On Fri, Sep 18, 2009 at 12:47:57PM -0400, William A. (Andy) Adamson wrote:
> >> On Mon, Sep 14, 2009 at 3:03 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> >> > On Fri, Sep 11, 2009 at 06:52:55PM -0400, andros@xxxxxxxxxx wrote:
> >> >> From: Andy Adamson <andros@xxxxxxxxxx>
> >> >>
> >> >> Both the max request and the max response size include the RPC header with
> >> >> credential (request only)  and verifier as well as the payload.
> >> >>
> >> >> The RPCSEC_GSS credential and verifier are the largest. Kerberos is the only
> >> >> supported GSS security mechanism, so the Kerberos GSS credential and verifier
> >> >> sizes are used.
> >> >
> >> > Rather than trying to estimate this, it might be simplest just to use
> >> > what the server's using to allocate memory: RPCSVC_MAXPAGES.  No, that
> >> > also takes into account space for the reply.  You could do
> >> >
> >> >        PAGE_SIZE * (1 + (RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE)
> >> >
> >> > Actually, by design the server's real limit is on the sum of the
> >> > request and the reply sizes.
> >>
> >> I think the actual limit is svc_max_payload rounded up to a multiple
> >> of PAGE_SIZE plus PAGE_SIZE, which is a lot smaller than the sum of
> >> the request and reply sizes. See below.
> >
> > Right.   I think you're agreeing with me?
> 
> yes!
> 
> >
> >> Note that svc_max_payload is what is returned in nfsd4_encode_fattr
> >> for MAXREAD and MAXWRITE. These attributes use svc_max_payload the
> >> same way this patch does: as the maximum data size, not including RPC
> >> headers.
> >>
> >> I don't think the server wants to advertise a MAXREAD/MAXWRITE that
> >> it can't supply because the fore channel maxrequest/maxresponse is
> >> too small, so some additional space needs to be added to
> >> svc_max_payload for the fore channel.
> >
> > Yes.
> 
> For the additional space, shall we use what this patch calculates or
> some other metric?

I guess something like roundup(serv->sv_max_payload + PAGE_SIZE,
PAGE_SIZE) is the best we can do for now.
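
Roughly, something like the following untested sketch (the struct and
field names are from memory and may not match the tree exactly; the
helpers are just for illustration, not the actual patch):

	#include <linux/kernel.h>
	#include <linux/sunrpc/svc.h>

	/*
	 * Illustrative only: clamp the client's requested fore channel
	 * buffer sizes to what a server thread can actually hold, i.e.
	 * the same value as serv->sv_max_mesg (sv_max_payload rounded
	 * up, plus one extra page for the RPC headers).
	 */
	static u32 nfsd4_max_xfer_size(struct svc_serv *serv)
	{
		return roundup(serv->sv_max_payload + PAGE_SIZE, PAGE_SIZE);
	}

	/* struct nfsd4_channel_attrs is assumed from the nfsd xdr headers */
	static void nfsd4_clamp_fore_channel(struct svc_serv *serv,
					     struct nfsd4_channel_attrs *ca)
	{
		u32 maxsize = nfsd4_max_xfer_size(serv);

		ca->maxreq_sz  = min(ca->maxreq_sz, maxsize);
		ca->maxresp_sz = min(ca->maxresp_sz, maxsize);
	}

That keeps the advertised maxrequest/maxresponse consistent with the
buffer a thread actually has, while still leaving room for the RPC
header on top of a MAXREAD/MAXWRITE-sized payload.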

> >> > What happens if we get a request such that both the request and reply
> >> > are under our advertised limits, but the sum is too much?  Can we just
> >> > declare that no client will be that weird and that we shouldn't have to
> >> > worry about it?
> >>
> >> I think the server already has this problem. In svc_init_buffer, which
> >> sets up the pages for a server thread's request/response handling, it
> >> allocates sv_max_mesg / PAGE_SIZE + 1 pages with the comment
> >>
> >> "extra page as we hold both request and reply. We assume one is at
> >> most one page"
> >>
> >> where
> >> sv_max_mesg = roundup(serv->sv_max_payload + PAGE_SIZE, PAGE_SIZE).
> >
> > Right.  The difference is that it now looks to me like we're actually
> > going to start promising that we support the large request + large
> > response case, when we don't.
> 
> OK - I see your point. With MAXREAD or MAXWRITE we only promise either
> a large request or a large response, not both in a single compound,
> e.g. a large write and a large read in the same compound.
> 
> >
> > I guess the problem's unlikely to arise, so maybe it's not worth fixing.
> > But it's annoying to have yet another undocumented restriction on the
> > compounds we'll accept.
> 
> I wonder what other servers are doing.

Yeah.  For now I guess we should ignore this.  (If you wanted to, in the
same patch, add a note about this problem to the end of
Documentation/filesystems/nfs41-server.txt, that would be good.)
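
To make the numbers concrete, here's a tiny userspace demo (my own
arithmetic, assuming a 4K PAGE_SIZE and the usual 1 MB TCP payload
limit) of why a compound carrying both a maximum-size WRITE and a
maximum-size READ overruns the per-thread buffer svc_init_buffer()
hands out:

	#include <stdio.h>

	#define PAGE_SIZE          4096UL
	#define RPCSVC_MAXPAYLOAD  (1024UL * 1024UL)  /* 1 MB TCP payload limit */
	#define ROUNDUP(x, y)      ((((x) + (y) - 1) / (y)) * (y))

	int main(void)
	{
		/* sv_max_mesg: payload plus one page for headers, page aligned */
		unsigned long max_mesg = ROUNDUP(RPCSVC_MAXPAYLOAD + PAGE_SIZE,
						 PAGE_SIZE);

		/* svc_init_buffer(): one extra page, request and reply share it */
		unsigned long pages = max_mesg / PAGE_SIZE + 1;

		/* worst case: max WRITE in the request plus max READ in the reply */
		unsigned long worst = 2 * RPCSVC_MAXPAYLOAD;

		printf("per-thread buffer: %lu pages = %lu bytes\n",
		       pages, pages * PAGE_SIZE);
		printf("worst-case request + reply: %lu bytes\n", worst);
		return 0;
	}

With those assumptions the thread has 258 pages (a little over 1 MB) of
buffer against a 2 MB worst case, which is the gap a note in
nfs41-server.txt would spell out.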

--b.
