With an XFS filesystem that has about 1 million files, would the default
value for the directory structure be sufficient? We can remove the
"-n size=<value>" option and just use the default.

Thanks,
-Al

-----Original Message-----
From: Dave Chinner [mailto:david@xxxxxxxxxxxxx]
Sent: Monday, October 12, 2015 5:23 PM
To: Al Lau (alau2)
Cc: xfs@xxxxxxxxxxx
Subject: Re: mkfs.xfs -n size=65536

On Fri, Oct 09, 2015 at 10:40:00PM +0000, Al Lau (alau2) wrote:
> I am looking for more details on the "-n size=65536" option in
> mkfs.xfs. The question is the memory allocation this option
> generates. The system is Redhat EL 7.0 (3.10.0-229.1.2.el7.x86_64).
>
> We have been getting this memory allocation deadlock message in the
> /var/log/messages file. The file system is used for ceph OSD and it
> has about 531894 files.

So, if you only have half a million files being stored, why would you
optimise the directory structure for tens of millions of files in a
single directory?

> Oct 6 07:11:09 abc-ceph1-xyz kernel: XFS: possible memory allocation
> deadlock in kmem_alloc (mode:0x8250)

mode = ___GFP_WAIT | ___GFP_IO | ___GFP_NOWARN | ___GFP_ZERO
     = GFP_NOFS | __GFP_ZERO | __GFP_NOWARN

which means it's come through kmem_zalloc() and so is a heap allocation
and hence probably quite small. Hence I doubt that has anything to do
with the directory block size, as the directory blocks are allocated as
single pages through a completely different allocation path and then
virtually mapped...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
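
A quick way to check the flag arithmetic above: a minimal sketch that
decodes mode 0x8250 using the ___GFP_* bit values from
include/linux/gfp.h in 3.10-era kernels. The specific hex values and
the GFP_NOFS composition are taken from that kernel series, not from
the thread itself, and may differ on other kernel versions.

/* Decode the allocation mode from the XFS warning above.
 * Bit values as in include/linux/gfp.h, 3.10-era kernels
 * (assumed; check your own kernel headers). */
#include <stdio.h>

#define ___GFP_WAIT    0x10u
#define ___GFP_IO      0x40u
#define ___GFP_FS      0x80u
#define ___GFP_NOWARN  0x200u
#define ___GFP_ZERO    0x8000u

int main(void)
{
    unsigned int mode = 0x8250;   /* from the log message above */

    /* GFP_NOFS = __GFP_WAIT | __GFP_IO: the allocation may sleep
     * and do I/O, but must not recurse into the filesystem. */
    unsigned int gfp_nofs = ___GFP_WAIT | ___GFP_IO;

    printf("mode 0x%x == GFP_NOFS | __GFP_ZERO | __GFP_NOWARN? %s\n",
           mode,
           mode == (gfp_nofs | ___GFP_ZERO | ___GFP_NOWARN)
               ? "yes" : "no");
    return 0;
}

Compiled and run, this prints "yes": __GFP_ZERO is what marks the
allocation as coming from a zeroing helper such as kmem_zalloc(),
which is the basis for Dave's deduction.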