I have SystemTap installed on the system.  This script is running in the
background, hoping to catch the call stack once the kmem_alloc message
appears.

# stap -o /var/kmem_alloc/kmem_alloc_bt.out backtrace.stp

# cat backtrace.stp
#! /usr/bin/env stap

# print the kernel call stack each time the xfs module calls xfs_err()
probe module("xfs").function("xfs_err").call
{
	print_backtrace();
}

-----Original Message-----
From: Dave Chinner [mailto:david@xxxxxxxxxxxxx]
Sent: Monday, October 12, 2015 8:33 PM
To: Al Lau (alau2)
Cc: xfs@xxxxxxxxxxx
Subject: Re: mkfs.xfs -n size=65536

On Tue, Oct 13, 2015 at 01:39:13AM +0000, Al Lau (alau2) wrote:
> Have a 3 TB file.  Logically divide it into 1024 sections.  Each section
> has a process doing dd to a randomly selected 4K block in a loop.
> Will this test case eventually cause the extent fragmentation that
> leads to the kmem_alloc message?
>
> dd if=/var/kmem_alloc/junk of=/var/kmem_alloc/fragmented obs=4096
> bs=4096 count=1 seek=604885543 conv=fsync,notrunc oflag=direct

If you were looking for a recipe to massively fragment a file, then you
found it.  And, yes, when you start to get millions of extents in a file,
as this workload will cause, you'll start having memory allocation
problems.

But I don't think that sets the GFP_ZERO flag anywhere, so that's not
necessarily where the memory shortage is coming from.  I just committed
some changes to the dev tree that allow more detailed information to be
obtained from this allocation error point - perhaps it would be
worthwhile trying a kernel build from the current for-next tree and
turning the error level up to 11?

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
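
The "error level up to 11" suggestion presumably refers to the
fs.xfs.error_level sysctl (documented range 0-11, default 3).  Assuming
that is the knob in question, it can be raised at run time with
something like:

# assumes fs.xfs.error_level is the "error level" being referred to
sysctl -w fs.xfs.error_level=11
# or, equivalently:
echo 11 > /proc/sys/fs/xfs/error_level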
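
For reference, a rough sketch of the random-write workload described in
the quoted message - 1024 looping dd writers, each confined to its own
slice of the 3 TB file.  The paths, the section arithmetic and the use of
bash's $RANDOM are illustrative assumptions, not the script that was
actually run:

#!/bin/bash
TARGET=/var/kmem_alloc/fragmented   # 3 TB target file (assumed path)
SRC=/var/kmem_alloc/junk            # source of the 4 KiB data (assumed path)
SECTIONS=1024
FILE_BLOCKS=$(( 3 * 1024 * 1024 * 1024 * 1024 / 4096 ))  # 4 KiB blocks in 3 TB
PER_SECTION=$(( FILE_BLOCKS / SECTIONS ))

for i in $(seq 0 $(( SECTIONS - 1 ))); do
    (
        while :; do
            # pick a (roughly) random 4 KiB block within this process's section
            off=$(( i * PER_SECTION + (RANDOM * RANDOM) % PER_SECTION ))
            dd if="$SRC" of="$TARGET" bs=4096 count=1 seek="$off" \
               conv=fsync,notrunc oflag=direct 2>/dev/null
        done
    ) &
done
wait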