RE: mkfs.xfs -n size=65536

To remedy the fragmented files in our production systems, can I run the xfs_fsr utility to de-fragment the files?

Thanks,
-Al
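[For reference, a typical single-file defragmentation pass with the xfsprogs tools might look like the sketch below. The file path reuses the one from the test case; the device name is an example, and both commands need root on a mounted XFS filesystem.]

```shell
# Inspect the extent list of the suspect file (path from the test case).
xfs_bmap -v /var/kmem_alloc/fragmented

# Report filesystem-wide fragmentation (read-only; device name is an example).
xfs_db -r -c frag /dev/sdb1

# Defragment just that file; -v reports the extent counts as it works.
xfs_fsr -v /var/kmem_alloc/fragmented
```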

-----Original Message-----
From: Dave Chinner [mailto:david@xxxxxxxxxxxxx] 
Sent: Monday, October 12, 2015 8:33 PM
To: Al Lau (alau2)
Cc: xfs@xxxxxxxxxxx
Subject: Re: mkfs.xfs -n size=65536

On Tue, Oct 13, 2015 at 01:39:13AM +0000, Al Lau (alau2) wrote:
> Have a 3 TB file.  Logically divide into 1024 sections.  Each section 
> has a process doing dd to a randomly selected 4K block in a loop.  
> Will this test case eventually cause the extent fragmentation that 
> lead to the kmem_alloc message?
> 
> dd if=/var/kmem_alloc/junk of=/var/kmem_alloc/fragmented obs=4096 
> bs=4096 count=1 seek=604885543 conv=fsync,notrunc oflag=direct

If you were looking for a recipe to massively fragment a file, then you found it. And, yes, once a file accumulates millions of extents, as this workload will cause, you'll start having memory allocation problems.
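[Scaled down to something runnable, the recipe above looks roughly like the sketch below. Sizes are shrunk from 3 TB to 16 MB and `oflag=direct` is dropped so it runs on any filesystem; the original test hammered a huge file with direct I/O. Paths are temporary files, not the ones from the test case.]

```shell
# Source block: one 4 KB buffer of zeros (stand-in for the "junk" input file).
SRC=$(mktemp)
DST=$(mktemp)
dd if=/dev/zero of="$SRC" bs=4096 count=1 2>/dev/null

# Write single 4 KB blocks at random offsets within a 16 MB range,
# leaving holes between them. On XFS, each isolated write into a hole
# tends to be allocated as its own extent, so repeating this at scale
# drives the extent count up into the millions.
for i in $(seq 1 32); do
    off=$((RANDOM % 4096))   # random 4 KB block number within 16 MB
    dd if="$SRC" of="$DST" bs=4096 count=1 seek="$off" \
       conv=fsync,notrunc 2>/dev/null
done

ls -l "$DST"
```

The file grows to wherever the highest random write landed; everything between the written blocks stays sparse.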

But I don't think that sets the GFP_ZERO flag anywhere, so that's not necessarily where the memory shortage is coming from. I just committed some changes to the dev tree that allow more detailed information to be obtained from this allocation error point - perhaps it would be worthwhile trying a kernel build from the current for-next tree and turning the error level up to 11?

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


