Re: Failing XFS memory allocation

On 03/23/2016 03:10 PM, Brian Foster wrote:
> On Wed, Mar 23, 2016 at 02:56:25PM +0200, Nikolay Borisov wrote:
>>
>>
>> On 03/23/2016 02:43 PM, Brian Foster wrote:
>>> On Wed, Mar 23, 2016 at 12:15:42PM +0200, Nikolay Borisov wrote:
> ...
>>> It looks like it's working to add a new extent to the in-core extent
>>> list. If this is the stack associated with the warning message (combined
>>> with the large alloc size), I wonder if there's a fragmentation issue on
>>> the file leading to an excessive number of extents.
>>
>> Yes, this is the stack trace associated with the warning.
>>
>>>
>>> What does 'xfs_bmap -v /storage/loop/file1' show?
>>
>> It spews a lot of output, but here is a summary; more detailed info can
>> be provided if you need it:
>>
>> xfs_bmap -v /storage/loop/file1 | wc -l
>> 900908
>> xfs_bmap -v /storage/loop/file1 | grep -c hole
>> 94568
>>
>> Also, what would constitute an "excessive number of extents"?
>>
> 
> I'm not sure where one would draw the line, tbh; it's just a matter of
> having so many extents that it causes problems, either in performance
> (i.e., reading/modifying the extent list) or in allocations such as the
> one you're running into. As it is, XFS maintains the full extent list
> for an active inode in memory, so that's 800k+ extents that it's
> looking for memory for.

I saw in the code comments that this problem has already been identified
and that a possible solution would be to add another level of
indirection. Also, can you confirm that my understanding of the
indirection array is correct, namely that each entry in it (xfs_ext_irec)
is responsible for 256 extents? (The er_extbuf buffer is PAGE_SIZE/4 KiB
and an extent record is 16 bytes, which gives 256 extents per buffer.)
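
If it helps, this is the back-of-the-envelope arithmetic I'm basing that
on - a standalone sketch with my own variable names (not kernel code),
assuming a 4k er_extbuf and 16-byte extent records:

    /* Rough sizing of the in-core extent list indirection, assuming a
     * 4k buffer per xfs_ext_irec and 16-byte packed extent records.
     * The names below are illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned buf_size = 4096;                  /* bytes per er_extbuf   */
        const unsigned rec_size = 16;                    /* bytes per extent rec  */
        const unsigned per_irec = buf_size / rec_size;   /* 256 extents per entry */
        const unsigned nextents = 972564;                /* core.nextents, xfs_db */
        const unsigned nirecs   = (nextents + per_irec - 1) / per_irec;

        printf("%u extents per irec, %u irec entries, ~%u KiB in extent buffers\n",
               per_irec, nirecs, nirecs * buf_size / 1024);
        return 0;
    }

which, if my assumptions are right, puts this file at a few thousand
irec entries and on the order of 15 MB of extent buffers.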

> 
> It looks like that is your problem here. 800k or so extents over 878G
> looks to be about 1MB per extent. Are you using extent size hints? One
> option that might prevent this is to use a larger extent size hint
> value. Another might be to preallocate the entire file up front with
> fallocate. You'd probably have to experiment with what option or value
> works best for your workload.

By preallocating with fallocate you mean using fallocate with
FALLOC_FL_ZERO_RANGE and not FALLOC_FL_PUNCH_HOLE, right? Because as it
stands now the file does have holes, which presumably are being filled,
and filling each one requires allocating a new extent, which is what
caused the issue. Am I right in this reasoning?
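
Just so we're talking about the same thing, this is roughly what I would
try for the up-front preallocation - a minimal sketch, with the path and
size taken from the numbers above, and no claim that this is exactly the
approach you had in mind:

    #define _GNU_SOURCE
    #include <fcntl.h>      /* fallocate() */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/storage/loop/file1";
        off_t len = 878LL * 1024 * 1024 * 1024;   /* ~878G, the whole file */

        int fd = open(path, O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* mode 0: allocate (unwritten) blocks for the whole range up
         * front, so later writes convert existing extents instead of
         * allocating new ones into the holes. */
        if (fallocate(fd, 0, 0, len) < 0) {
            perror("fallocate");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }

If that's right, xfs_bmap afterwards should show the file as a small
number of large (unwritten) extents rather than ~1MB pieces.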

Currently I'm not using an extent size hint, but I will look into that.
Also, if the extent size hint is, say, 4 MB, wouldn't that cause a fairly
serious loss of space if the writes are smaller than 4 MB? Would XFS try
to perform some sort of extent coalescing, or something else? I'm not an
FS developer, but my understanding is that with a 4 MB extent size hint,
whenever a new write occurs, even if it's only 256 KB, a whole 4 MB
extent would be allocated, no?
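
For completeness, this is how I understand the hint would be set
programmatically, using the fsxattr ioctls from the xfsprogs headers (the
4 MB value is just the example from above; I believe
xfs_io -c "extsize 4m" <file> does the same thing):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <xfs/xfs.h>    /* struct fsxattr, XFS_IOC_FS[GS]ETXATTR, XFS_XFLAG_EXTSIZE */

    int main(void)
    {
        const char *path = "/storage/loop/file1";   /* example file from above */
        struct fsxattr fsx;

        int fd = open(path, O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        if (ioctl(fd, XFS_IOC_FSGETXATTR, &fsx) < 0) {
            perror("XFS_IOC_FSGETXATTR");
            close(fd);
            return 1;
        }

        /* Ask XFS to allocate space for this file in 4 MB chunks. */
        fsx.fsx_xflags  |= XFS_XFLAG_EXTSIZE;
        fsx.fsx_extsize  = 4 * 1024 * 1024;

        if (ioctl(fd, XFS_IOC_FSSETXATTR, &fsx) < 0) {
            perror("XFS_IOC_FSSETXATTR");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }

As far as I understand, the hint can only be changed while the file has
no extents allocated yet, so it would have to be set right after creating
the file, or inherited from the parent directory via the extsize-inherit
flag - please correct me if that's wrong.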

And a final question - when I printed the contents of the inode with
xfs_db I got core.nextents = 972564, whereas running xfs_bmap | wc -l on
the file gives a different (varying) number each time. Why is that?

Thanks a lot for taking the time to reply.




_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


