Large file with a lot of extents: Memory allocation failure

Hi,

This came up on the mailing list before, and I think the devs are aware
of the issue; I just wanted to add a data point. The stack traces point
to xfs_iext_realloc_indirect, the same as in e.g.
http://oss.sgi.com/archives/xfs/2015-07/msg00075.html .
I created a large file (2 TB) and then punched a lot of holes into it.
After a while XFS hangs with messages like:

 > possible memory allocation deadlock size 131088 in kmem_realloc (mode:0x2400240)

Kernel version is mainline 4.9.22. Memory is horribly fragmented by
other things (a user-space program with fluctuating memory allocations,
bcache, blk-mq, btrfs).

One place I can see this issue becoming a blocker is my backup software,
UrBackup. With ZFS/btrfs it can store disk images as raw files: to
create an incremental image backup it snapshots the raw file and then
modifies it to reflect the current state, including punching holes into
the raw file wherever the client file system has unused space.
With reflinks now in XFS, XFS could be added to the list of file systems
able to perform this kind of image backup, but I fear it would then hit
the memory allocation deadlock when free space changes a lot (which it
often does) and the client volume is large.


Regards,
Martin Raiber




