Re: Extreme fragmentation ho!

On Tue, Dec 29, 2020 at 09:06:22AM +1100, Dave Chinner wrote:
> On Tue, Dec 22, 2020 at 08:54:53AM +1100, Chris Dunlop wrote:
> > The file is sitting on XFS on LV on a raid6 comprising 6 x 5400 RPM HDD:
>
> ... probably not that unreasonable for pretty much the slowest
> storage configuration you can possibly come up with for small,
> metadata write intensive workloads.

[ Chris grimaces and glances over at the 8+3 erasure-encoded ceph rbd sitting like a pitch drop experiment in the corner. ]

Speaking of slow storage and metadata write intensive workloads, why aren't reflinks supported with a realtime device? That was one approach I wanted to try: get the metadata ops running on small fast storage, with the bulk data sitting on big slow storage. But:

# mkfs.xfs -m reflink=1 -d rtinherit=1 -r rtdev=/dev/fast /dev/slow
reflink not supported with realtime devices

My naive thought was a reflink was probably "just" a block range referenced from multiple places, and probably a refcount somewhere. It seems like it should be possible to have the range, references and refcount sitting on the fast storage pointing to the actual data blocks on the slow storage.

> > What is the easiest way to recreate a similarly (or even better,
> > identically) fragmented file?
>
> Just script xfs_io to reflink random bits and bobs from other files
> into a larger file?

Thanks - that did it.
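In case it's useful to anyone else, here's a sketch of the sort of script meant. The paths, extent count, and 4k extent size are just illustrative; it reflinks random block-aligned ranges from a donor file into consecutive positions in a target, which leaves the target with one extent record per loop iteration. Requires a reflink-enabled XFS mount.

```shell
# Sketch only: build a heavily-fragmented file by reflinking random
# 4k extents from a donor file into a target. All names here
# (make_fragmented, the /mnt/xfs paths in the usage note) are made up.
make_fragmented() {
    local src=$1 dst=$2 nextents=${3:-10000}
    local bs=4096
    local filesz=$((1024 * 1024 * 1024))   # 1 GiB donor

    # Create the donor and a sparse target of the same size.
    xfs_io -f -c "pwrite 0 $filesz" "$src"
    xfs_io -f -c "truncate $filesz" "$dst"

    for ((i = 0; i < nextents; i++)); do
        # Pick a random block-aligned source offset. $RANDOM only
        # covers 0..32767, so combine two draws to span the whole file.
        local off=$(( (RANDOM * 32768 + RANDOM) % (filesz / bs) * bs ))
        # reflink <src_file> <src_off> <dst_off> <len>
        xfs_io -c "reflink $src $off $((i * bs)) $bs" "$dst"
    done
}

# Usage (on a reflink-enabled XFS mount):
#   make_fragmented /mnt/xfs/donor /mnt/xfs/fragmented 10000
# Then check the damage with: xfs_io -c "stat" /mnt/xfs/fragmented
```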

Cheers,

Chris


