Hello again,

this line looks suspicious to me:

# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
ext3_inode_cache      98472    150260     760     5     1 : tunables    54    27     8 : slabdata  30052  30052    189

Is it one big filesystem with about 1,342,177,280 inodes? Has this amount ever been tested in the wild? The filesystem is, btw, marked as needs_recovery.

regards,
Sascha

2009/1/2 Jon Stanley <jonstanley@xxxxxxxxx>:
> On Thu, Jan 1, 2009 at 7:17 AM, Kostas Georgiou
> <k.georgiou@xxxxxxxxxxxxxx> wrote:
>
>> Can you run blktrace+seekwatcher (both in EPEL) to get an idea on
>> what is going on? An iostat -x -k /dev/sde 1 output will also be
>> helpful.
>
> Here's a slabinfo that someone else requested and the iostat. I don't
> have access to the xen dom0 though, but I don't suspect it'd show much
> different.
>
> I put it up on a webserver since gmail loves to chop up my lines and
> make something like this unusable. See
> http://palladium.jds2001.org/pub/nfs1-stats.txt
>
> _______________________________________________
> Fedora-infrastructure-list mailing list
> Fedora-infrastructure-list@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
>

--
Mit freundlichen Grüßen, / with kind regards,
Sascha Thomas Spreitzer
http://spreitzer.name/

_______________________________________________
Fedora-infrastructure-list mailing list
Fedora-infrastructure-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
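
For reference, a minimal sketch of how the slabinfo fields quoted above translate into memory use. It assumes the standard /proc/slabinfo 2.x column layout shown in the header line and a 4 KiB page size; the script name, path, and helper function are illustrative and not from the original thread.

#!/usr/bin/env python
# Sketch only: estimate how much memory a named slab cache pins,
# assuming the /proc/slabinfo 2.x layout quoted in the mail above
# and a 4 KiB page size (check with `getconf PAGESIZE`).

PAGE_SIZE = 4096  # assumed page size

def slab_usage(name, path="/proc/slabinfo"):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == name:
                # <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
                active_objs, num_objs, objsize, objperslab, pagesperslab = map(int, fields[1:6])
                # after "slabdata": <active_slabs> <num_slabs> <sharedavail>
                num_slabs = int(fields[14])
                return {
                    "active_objs": active_objs,
                    "num_objs": num_objs,
                    "object_bytes": num_objs * objsize,
                    "slab_page_bytes": num_slabs * pagesperslab * PAGE_SIZE,
                }
    return None

if __name__ == "__main__":
    print(slab_usage("ext3_inode_cache"))

Applied to the line quoted above, that works out to roughly 150260 * 760 bytes (about 109 MiB) of inode objects across 30052 single-page slabs (about 117 MiB of pages), so the cache itself is modest; the question about the filesystem's total inode count is a separate matter.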