> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@xxxxxxxxxxxx]
> Sent: Monday, June 09, 2008 1:54 PM
> To: Weathers, Norman R.
> Cc: linux-nfs@xxxxxxxxxxxxxxx
> Subject: Re: Problems with large number of clients and reads
>
> On Mon, Jun 09, 2008 at 09:19:03AM -0500, Weathers, Norman R. wrote:
> > > I'd've thought that suggests a leak of memory allocated by
> > > kmalloc().
> > >
> > > Does the size-4096 cache decrease eventually, or does it stay
> > > that large until you reboot?
> >
> > I would agree that it "looks" like a memory leak. If I restart
> > NFS, the size-4096 cache goes from 12 GB to under 50 MB,
>
> And restarting nfsd is the only thing you've found that will do this?
> (So decreasing the number of threads, or stopping all the clients
> won't do anything to the size-4096 number?)

Unfortunately, I cannot stop the clients (they are in the middle of
long-running jobs). I might be able to test this soon.

If the thread count is high, then yes, reducing the number of threads
does appear to release some of the memory, but even with as few as
three threads the memory usage climbs very high, just not as high as
with, say, eight threads (a sketch for scripting that test follows
below). When memory usage climbs that high, the box can stop
responding over the network (ssh, rsh) and is sluggish even over our
serial console connection to the server(s).

The same scenario occurs with every kernel I have tried, from 2.6.22.x
through the 2.6.25 series. The 2.6.25 series is interesting in that I
can push the same traffic through a box running 2.6.25 and the load
average stays under 0.3 (with 3 threads), whereas a 2.6.22.x kernel
shows a load of over 3 under the same conditions. Also, this is all
with the SLAB allocator; SLUB crashes every time I use it under heavy
load.

> > but then depending upon how hard the box is utilized, it starts to
> > climb back up. I have seen it climb back up to 3 or 4 GB right
> > after the restart, but that is much better because the regular
> > disk cache will grow from the 2 GB that it was pressured into back
> > to 5 or 8 GB, so all of the files have been reread into memory and
> > things are progressing smoothly. It is weird. I really think that
> > this has to do with a lot of connections happening at once,
> > because I can run slabtop and see a node that is running full out
> > but only using a couple hundred megs of the size-4096 slab, and
> > then turn around and see another node that is pushing out 245 MB/s
> > and all of a sudden using over 12 GB of size-4096. It is very
> > odd... If I lower the number of threads from a usable 64 to a low
> > of 3 threads, there is less chance of the servers going haywire,
> > to the point of being so loaded that they may crash or become
> > unreachable over the network (fortunately, I have serial consoles
> > on these boxes so that I can get on the nodes if they reach that
> > point). If I run 8 threads, then with enough clients I can bring
> > down one of these servers: size-4096 goes through the roof, and
> > depending on the hour of the day, the server either crashes or
> > becomes unresponsive.
>
> These are doing only NFS v2 and v3?  (No v4?)
>
> --b.

It should only be NFS v3 over TCP (a quick way to verify that from the
nfsd counters is sketched below, after the slab sketches).
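
For anyone who wants to watch the size-4096 number over time without
sitting on slabtop, here is a minimal sketch in Python. It assumes a
SLAB kernel exposing a version 2.x /proc/slabinfo, where data lines
read "name <active_objs> <num_objs> <objsize> <objperslab>
<pagesperslab> : ..."; check your kernel's header line before trusting
the column positions.

#!/usr/bin/env python
# Sample the size-4096 cache from /proc/slabinfo and report how much
# memory its allocated slabs currently pin.
def slab_bytes(cache="size-4096"):
    for line in open("/proc/slabinfo"):
        fields = line.split()
        if fields and fields[0] == cache:
            num_objs, objsize = int(fields[2]), int(fields[3])
            return num_objs * objsize
    return None  # cache absent (e.g. a SLUB kernel merges generic caches)

if __name__ == "__main__":
    b = slab_bytes()
    if b is None:
        print("size-4096 not found in /proc/slabinfo")
    else:
        print("size-4096: %.1f MB" % (b / 1048576.0))

Running that from cron or a loop while the load ramps up gives a record
of exactly when the cache balloons.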
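
The thread-count experiment Bruce is asking about could then be
scripted along these lines. This is only a sketch: it assumes the nfsd
filesystem is mounted at /proc/fs/nfsd (its "threads" file is the same
knob that "rpc.nfsd N" writes) and it must run as root.

#!/usr/bin/env python
# Step the nfsd thread count down and sample the size-4096 cache after
# each step, to test whether fewer threads alone shrinks it short of a
# full nfsd restart.
import time

def slab_bytes(cache="size-4096"):
    # same /proc/slabinfo parsing as the sketch above
    for line in open("/proc/slabinfo"):
        fields = line.split()
        if fields and fields[0] == cache:
            return int(fields[2]) * int(fields[3])
    return None

def set_nfsd_threads(n):
    with open("/proc/fs/nfsd/threads", "w") as f:
        f.write("%d\n" % n)

for n in (64, 32, 8, 3):
    set_nfsd_threads(n)
    time.sleep(60)  # let in-flight requests drain before sampling
    print("%2d threads -> size-4096 = %r bytes" % (n, slab_bytes()))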
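
And to double-check the "v3 only" assumption, the per-version
procedure counters that knfsd keeps can be summed. This assumes the
usual /proc/net/rpc/nfsd format, where each "procN" line carries a
count of how many counters follow and then the counters themselves.

#!/usr/bin/env python
# Sum the per-procedure call counters for each NFS version the server
# has answered; a truly v3-only server should show zero (or static)
# proc2 and proc4 totals.
for line in open("/proc/net/rpc/nfsd"):
    fields = line.split()
    if fields and fields[0] in ("proc2", "proc3", "proc4"):
        total = sum(int(n) for n in fields[2:])
        print("%s: %d total calls" % (fields[0], total))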