Re: [PATCH 2/2] nfsd: give out fewer session slots as limit approaches

On Mon, Sep 25 2017, J. Bruce Fields wrote:

> From: "J. Bruce Fields" <bfields@xxxxxxxxxx>
>
> Instead of granting client's full requests until we hit our DRC size
> limit and then failing CREATE_SESSIONs (and hence mounts) completely,
> start granting clients smaller slot tables as we approach the limit.
>
> The factor chosen here is pretty much arbitrary.

Hi Bruce....
 I've been looking at this patch - and the various add-ons that fix it :-(
 There seems to be another problem though.
 Prior to this patch, avail would never exceed
    nfsd_drc_max_mem - nfsd_drc_mem_used
 Since this patch, avail can never be less than slotsize, so it can
 exceed the above.
 This means that 'num' will never be less than 1 (i.e. never zero), and
 num * slotsize might exceed
    nfsd_drc_max_mem - nfsd_drc_mem_used
 in which case nfsd_drc_mem_used ends up exceeding nfsd_drc_max_mem.

 When that happens, the next call to nfsd4_get_drc_mem() will evaluate

	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;

 which, being unsigned long arithmetic, will wrap around to a very large
 value indeed.  Maybe not the intention.
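
 To illustrate the wrap-around, here is a standalone user-space sketch
 (not the kernel code; the variable names mirror nfsd4_get_drc_mem() but
 the numbers are invented):

	#include <stdio.h>

	int main(void)
	{
		unsigned long nfsd_drc_max_mem = 100;	/* arbitrary small limit */
		unsigned long nfsd_drc_mem_used = 99;	/* nearly exhausted */
		unsigned long slotsize = 2;
		unsigned long total_avail, avail, num;

		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;	/* 1 */
		/* the clamp keeps avail >= slotsize, so it exceeds total_avail */
		avail = total_avail < slotsize ? slotsize : total_avail;
		num = avail / slotsize;			/* 1, never 0 */

		nfsd_drc_mem_used += num * slotsize;	/* 101 > nfsd_drc_max_mem */

		/* next call: the unsigned subtraction wraps around */
		total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
		printf("total_avail on the next call: %lu\n", total_avail);
		return 0;
	}

 On 64-bit that prints 18446744073709551615 (ULONG_MAX), so the limit no
 longer constrains the next client at all.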

 I would have sent a patch to fix this, except that it bothers me that
 nfsd4_get_drc_mem() could ever return 0 (it cannot at the moment, but
 would after a "fix").  That would result in check_forechannel_attrs()
 returning nfserr_jukebox, and the client retrying indefinitely (which
 is exactly the symptom reported to me by a customer running a 4.12
 kernel).
 This isn't nice behaviour.

 Given that the server makes no attempt to reclaim slot memory for
 clients, would NFS4ERR_RESOURCE be a better error here?

 Also, I'd like to suggest that the '1/3' heuristic be changed to 1/16.
 Assuming 30 slots get handed out normally (which my testing shows -
 about 2k each, with an upper limit of 64k), the hand-outs go like this
 (a rough simulation is sketched after the two lists):
   When 90 slots left, we hand out
    30 (now 60 left)
    20 (now 40 left)
    13 (now 27 left)
     9 (now 18 left)
     6 (now 12 left)
     4 (now 8 left)
     2 (now 6 left)
     2 (now 4 left)
     1
     1
     1
     1
 which is a rapid decline as clients are added.
 With 16, we hand out 30 at a time until 480 slots are left (30Meg),
 then: 30 28 26 24 23 21 20 19 18 16 15 15 14 13 12 11 10 10 9 9 8 8 7 7
    6 6 5 5 5 5 4 4 4 3 3 3 3 3 3 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1
    1 1
 slots per session.
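
 Here is a rough standalone simulation of those two sequences (counting
 whole slots rather than bytes, and assuming every client asks for 30
 slots; it reproduces the 1/3 list exactly and the general shape of the
 1/16 one):

	#include <stdio.h>

	/* hand out slots from a shrinking pool, taking at most 1/div of
	 * what is left each time (clamped to at least one slot) */
	static void simulate(unsigned long slots_left, unsigned long div)
	{
		const unsigned long want = 30;	/* slots each client asks for */
		unsigned long num;

		printf("1/%lu:", div);
		while (slots_left > 0) {
			num = slots_left / div;
			if (num < 1)
				num = 1;	/* the clamp's lower bound */
			if (num > want)
				num = want;	/* never more than requested */
			printf(" %lu", num);
			slots_left -= num;
		}
		printf("\n");
	}

	int main(void)
	{
		simulate(90, 3);	/* rapid decline */
		simulate(480, 16);	/* much gentler decline */
		return 0;
	}

 With /3 each new client shrinks the remaining pool by a third, so the
 next hand-out is only ~2/3 of the previous one; with /16 it is 15/16,
 so the per-client allocation declines by only ~6% per new client
 instead of ~33%.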

 Am I convincing?

To make it more concrete: this is what I'm thinking of.  Which bits do
you like?

Thanks,
NeilBrown

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 7857942c5ca6..5d11ceaee998 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1573,11 +1573,15 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
 	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
 	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
 	/*
-	 * Never use more than a third of the remaining memory,
+	 * Never use more than 1/16 of the remaining memory,
 	 * unless it's the only way to give this client a slot:
 	 */
-	avail = clamp_t(unsigned long, avail, slotsize, total_avail/3);
+	avail = clamp_t(unsigned long, avail, slotsize, total_avail/16);
 	num = min_t(int, num, avail / slotsize);
+	if (nfsd_drc_mem_used + num * slotsize > nfsd_drc_max_mem)
+		/* Completely out of space - sorry */
+		num = 0;
+
 	nfsd_drc_mem_used += num * slotsize;
 	spin_unlock(&nfsd_drc_lock);
 
@@ -3172,7 +3176,7 @@ static __be32 check_forechannel_attrs(struct nfsd4_channel_attrs *ca, struct nfs
 	 */
 	ca->maxreqs = nfsd4_get_drc_mem(ca);
 	if (!ca->maxreqs)
-		return nfserr_jukebox;
+		return nfserr_resource;
 
 	return nfs_ok;
 }
