Re: Testing devices for discard support properly

On Wed, May 8, 2019 at 2:12 PM Ric Wheeler <ricwheeler@xxxxxxxxx> wrote:
>
> (stripped out the html junk, resending)
>
> On 5/8/19 1:25 PM, Martin K. Petersen wrote:
> >>> WRITE SAME also has an ANCHOR flag which provides a use case we
> >>> currently don't have fallocate plumbing for: Allocating blocks without
> >>> caring about their contents. I.e. the blocks described by the I/O are
> >>> locked down to prevent ENOSPC for future writes.
> >> Thanks for that detail! Sounds like ANCHOR in this case exposes
> >> whatever data is there (similar I suppose to normal block device
> >> behavior without discard for unused space)? Seems like it would be
> >> useful for virtually provisioned devices (enterprise arrays or
> >> something like dm-thin targets) more than normal SSD's?
> > It is typically used to pin down important areas to ensure one doesn't
> > get ENOSPC when writing journal or metadata. However, these are
> > typically the areas that we deliberately zero to ensure predictable
> > results. So I think the only case where anchoring makes much sense is on
> > devices that do zero detection and thus wouldn't actually provision N
> > blocks full of zeroes.
>
> This behavior at the block layer might also be interesting for something
> like the VDO device (compression/dedup make it near impossible to
> predict how much space is really there since it is content specific).
> Might be useful as a way to hint to VDO about how to give users a
> promise of "at least this much" space? If the content is good for
> compression or dedup, you would get more, but never see less.
>

In the case of VDO, writing zeroed blocks will not consume space, due
to VDO's zero-block elimination. However, that also means such writes
won't "reserve" space, either. The WRITE SAME command with the ANCHOR
flag is SCSI-specific, so it won't apply to a bio-based device like VDO.

Space savings also means that a write of N blocks will often end up
consuming fewer than N blocks, depending on how much compression and
deduplication can achieve. Likewise, a discard of N blocks may reclaim
fewer than N blocks.
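For what it's worth, the closest userspace plumbing we have today for
discard-like semantics is fallocate(2) with FALLOC_FL_PUNCH_HOLE, which
deallocates a range without changing file size. A minimal sketch below
(the punch_hole helper is illustrative, not an existing API; flag values
are from <linux/falloc.h>, and the st_blocks drop depends on the
filesystem honoring the hole punch):

```python
import ctypes
import os
import tempfile

# Flag values from <linux/falloc.h>.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02   # must be OR'd with KEEP_SIZE

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def punch_hole(fd, offset, length):
    """Deallocate a byte range; the userspace analog of a discard."""
    ret = libc.fallocate(fd,
                         FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         ctypes.c_long(offset), ctypes.c_long(length))
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

with tempfile.NamedTemporaryFile() as f:
    f.write(b"\xff" * (1 << 20))          # 1 MiB of nonzero data
    f.flush()
    before = os.fstat(f.fileno()).st_blocks
    punch_hole(f.fileno(), 0, 1 << 20)    # "discard" the whole range
    after = os.fstat(f.fileno()).st_blocks
    # File size is unchanged, but allocated blocks should shrink.
    print(before, after, os.fstat(f.fileno()).st_size)
```

Note the parallel to the discard point above: just as a discard of N
blocks may reclaim fewer than N on a space-saving device, the blocks
actually freed here depend on the underlying filesystem and device.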


Thanks,

Bryan
