Re: Ext4 and xfs problems in dm-thin on allocation and discard

On 06/19/12 22:06, Dave Chinner wrote:
On Tue, Jun 19, 2012 at 02:48:59PM -0400, Mike Snitzer wrote:
On Tue, Jun 19 2012 at 10:44am -0400,
Mike Snitzer<snitzer@xxxxxxxxxx>  wrote:

On Tue, Jun 19 2012 at  9:52am -0400,
Spelic<spelic@xxxxxxxxxxxxx>  wrote:

I don't know by what mechanism xfs fails to unmap
blocks from dm-thin, but it really can't.
Anyone with dm-thin installed can try it. This is 100%
reproducible for me.
I was initially surprised by this considering the thinp-test-suite does
test a compilebench workload against xfs and ext4 using online discard
(-o discard).

But I just modified that test to use a thin-pool with 'ignore_discard'
and the test still passed on both ext4 and xfs.

So there is more work needed in the thinp-test-suite to use blktrace
hooks to verify that discards are occurring when the
compilebench-generated files are removed.

I'll work through that and report back.
blktrace shows discards for both xfs and ext4.

But in general xfs is issuing discards with much smaller extents than
ext4 does, e.g.:
That's normal when you use -o discard - XFS sends extremely
fine-grained discards, as they have to be issued during the checkpoint
commit that frees the extent. Hence they can't be aggregated the way
they are in ext4.

As it is, no-one really should be using -o discard - it is extremely
inefficient compared to a background fstrim run given that discards
are unqueued, blocking IOs. It's just a bad idea until the lower
layers get fixed to allow asynchronous, vectored discards and SATA
supports queued discards...
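
(A rough sketch, not from this thread and not dm-thin's actual code, of why
such fine-grained discards may free nothing: assume the pool only unmaps a
block when a discard covers that block completely, so only whole pool blocks
fully inside the discarded range count. The 1MB block size below is just an
example value.)

#include <stdio.h>
#include <stdint.h>

/* Count whole pool blocks fully covered by a discard of [start, start+len). */
static uint64_t blocks_freed(uint64_t start, uint64_t len, uint64_t block_size)
{
	uint64_t first = (start + block_size - 1) / block_size;  /* round start up */
	uint64_t last  = (start + len) / block_size;             /* round end down */

	return last > first ? last - first : 0;
}

int main(void)
{
	uint64_t pool_block = 1024 * 1024;  /* assumed 1MB thin-pool block size */

	/* A fine-grained 64k discard, as -o discard on XFS might issue:
	 * it never spans a whole 1MB pool block, so nothing is unmapped. */
	printf("64k discard frees %llu blocks\n",
	       (unsigned long long)blocks_freed(3 * pool_block + 4096, 65536, pool_block));

	/* A large fstrim-style discard covering several whole blocks. */
	printf("8MB discard frees %llu blocks\n",
	       (unsigned long long)blocks_freed(3 * pool_block + 4096, 8 * pool_block, pool_block));

	return 0;
}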


Could it be that the thin blocksize is larger than the discard granularity used by xfs, so nothing ever gets unmapped? I have tried thin pools with the default blocksize (64k, AFAIR, with lvm2) and with 1MB. However, I have also tried fstrim on xfs, and it is likewise unable to unmap anything from the dm-thin.
What is the discard granularity with fstrim on xfs?
Sorry, I can't access the machine right now; maybe tomorrow, or over the weekend.
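
(For reference, fstrim is essentially a wrapper around the FITRIM ioctl; below
is a minimal sketch, not from this thread, that issues it by hand. The mount
point /mnt/thin and the 1MB minlen are placeholder assumptions; the filesystem
may round minlen up to its own discard granularity, and on return range.len
reports how many bytes were trimmed.)

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FITRIM, struct fstrim_range */

int main(void)
{
	struct fstrim_range range = {
		.start  = 0,
		.len    = UINT64_MAX,	/* trim the whole filesystem */
		.minlen = 1024 * 1024,	/* hypothetical: skip extents smaller than 1MB */
	};
	int fd = open("/mnt/thin", O_RDONLY);	/* placeholder mount point */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FITRIM, &range) < 0) {
		perror("FITRIM");
		close(fd);
		return 1;
	}
	/* On return, range.len holds the number of bytes actually trimmed. */
	printf("trimmed %llu bytes\n", (unsigned long long)range.len);
	close(fd);
	return 0;
}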

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

