On Wed, Nov 27, 2013 at 06:43:51AM +0000, Al Viro wrote:
> On Tue, Nov 26, 2013 at 06:12:53AM -0800, Christoph Hellwig wrote:
> > On Tue, Nov 26, 2013 at 01:11:34PM +0000, Al Viro wrote:
> > > .config, please - all I'm seeing on mine is a bloody awful leak somewhere
> > > in VM that I'd been hunting for last week, so the damn thing gets OOMed
> > > halfway through xfstests run ;-/
> >
> > #
> > # Automatically generated file; DO NOT EDIT.
> > # Linux/x86 3.12.0-hubcap2 Kernel Configuration
> [snip]
>
> Could you post the output of your xfstests run?  FWIW, with your .config
> I'm seeing the same leak (shut down by turning spinlock debugging off,
> it's split page table locks that end up leaking when they are separately
> allocated) *and* xfs/253 seems to be sitting there indefinitely once
> we get to it - about 100% system time, no blocked processes, xfs_db running
> all the time for hours.  No oopsen on halt with that sucker skipped *or*
> interrupted halfway through.

Might be that your xfsprogs is old enough that it has a bug that the test
wants to verify is fixed.

> Setup is kvm on 3.3GHz amd64 6-core, with 4Gb given to guest (after having
> one too many OOMs on leaks).  virtio disk, with raw image sitting in a file
> on host, xfstests from current git, squeeze/amd64 userland on guest.
> Reasonably fast host disks (not that the sucker had been IO-bound, anyway).
> Tried both with UP and 4-way SMP guest, same picture on both...

I'm running on my laptop with a dual-core 2.5GHz i5, on preallocated raw
files on XFS on an older Intel SSD.
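(For reference, preallocated raw images like the ones I'm using can be set
up with fallocate - a sketch only, the 8 GiB size is made up, and the paths
just match the -drive entries below:)

```shell
# Sketch: preallocate raw backing files for the guest's test and
# scratch disks.  Size is illustrative; fallocate reserves the blocks
# up front instead of leaving a sparse file.
fallocate -l $((8 * 1024 * 1024 * 1024)) /work/images/test.img
fallocate -l $((8 * 1024 * 1024 * 1024)) /work/images/scratch.img
```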
Qemu command line:

kvm \
	-m 2048 \
	-smp 4 \
	-kernel arch/x86/boot/bzImage \
	-append "root=/dev/vda console=tty0 console=ttyS0,115200n8" \
	-nographic \
	-drive if=virtio,file=/work/images/debian.qcow2,cache=none,serial="test1234" \
	-drive if=virtio,file=/work/images/test.img,cache=none,aio=native \
	-drive if=virtio,file=/work/images/scratch.img,cache=none,aio=native

It's probably enough to run ./check with -g quick to reproduce it, too -
let me verify that, which I'd have to do to catch the output anyway.

Also if you want to point me at something else, feel free - it's very
reproducible here.

Wish I could be more help here, but with all the RCU and micro-optimizations
in the path lookup code I can't claim to really understand it anymore.

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
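(For completeness, the xfstests side of the setup boils down to a
local.config along these lines before running ./check -g quick - the
device names follow from the order of the virtio -drive entries above,
but the mount points here are made up:)

```shell
# Hypothetical xfstests local.config for the guest above; /dev/vdb and
# /dev/vdc correspond to the second and third -drive lines, mount
# points are illustrative.
TEST_DEV=/dev/vdb
TEST_DIR=/mnt/test
SCRATCH_DEV=/dev/vdc
SCRATCH_MNT=/mnt/scratch
```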