Re: Snapshot behavior on classic LVM vs ThinLVM

On 5.3.2018 at 10:42, Gionatan Danti wrote:
On 04-03-2018 at 21:53, Zdenek Kabelac wrote:
On the other hand, all the common filesystems on Linux were always written
to work on a device where the space is simply always there. So their
core algorithms simply never accounted for anything like
'thin provisioning' - which is almost 'fine', since thin provisioning
should be almost invisible - but the problem starts to become visible
under these over-provisioned conditions.

Unfortunately, the majority of filesystems never really tested all
those 'weird' conditions well - conditions which are suddenly easy to
trigger with a thin pool, but likely almost never happen on a real hdd...
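(For illustration, a minimal sandbox that triggers exactly this condition - assuming root and a free loop device; all file, VG and LV names and the sizes are examples only:)

  # Back a test VG with a sparse 256M file on a loop device
  truncate -s 256M /tmp/thin-backing.img
  LOOP=$(losetup -f --show /tmp/thin-backing.img)
  vgcreate vgtest "$LOOP"

  # A 128M thin pool, but a 1G *virtual* thin volume on top of it
  lvcreate -L 128M -T vgtest/pool
  lvcreate -V 1G -T vgtest/pool -n thinvol

  mkfs.ext4 /dev/vgtest/thinvol
  mount /dev/vgtest/thinvol /mnt

  # Writing past ~128M exhausts the pool while the filesystem still
  # believes almost 1G is free - the 'weird' condition described above
  dd if=/dev/zero of=/mnt/fill bs=1M count=200 oflag=direct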

Hi Zdenek, I'm a little confused by that statement.
Sure, it is 100% true for EXT3/4-based filesystems; however, when asking about that on the XFS mailing list, I got the definitive answer that XFS was adapted to cope well with thin provisioning ages ago. Is that the case?

Yes - it has been updated/improved/fixed - and I've already given you a link showing how to configure the behavior of XFS when, for example, the device reports ENOSPC to the filesystem.
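That configuration is the sysfs error policy - roughly like this, assuming a reasonably recent kernel (the configurable error behavior landed around 4.7) and with 'dm-3' standing in for whatever device actually backs the filesystem:

  # Current ENOSPC policy for XFS metadata writes on this device
  cat /sys/fs/xfs/dm-3/error/metadata/ENOSPC/max_retries

  # -1 = retry forever (the default), 0 = shut down immediately,
  # N = give up after N retries; a time limit is also available
  echo 0  > /sys/fs/xfs/dm-3/error/metadata/ENOSPC/max_retries
  echo 30 > /sys/fs/xfs/dm-3/error/metadata/ENOSPC/retry_timeout_seconds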

What needs to be understood here is that filesystems were not originally designed
to ever see this kind of error - once you created a filesystem in the past, the space was meant to be there all the time.

Anyway, a more direct question: what prevented the device-mapper team from implementing a full-read-only/fail-all-writes target? I feel that *many* filesystem problems could be bypassed with full-read-only pools... Am I wrong?

Well, complexity - it might look 'easy' to do at first sight, but in reality it would impact all the hot/fast paths with a number of extra checks, and that would have a rather dramatic performance impact.

The other point is that, while for lots of filesystems it might look like the best thing, that's not always true - there are cases where it's more desirable
to still have a working device with 'several' failing pieces in it...

And the third point is that it's unclear, from the kernel's POV, when this 'full pool' moment actually happens - i.e. imagine a 'write' operation running on one thin device while a 'trim/discard' operation runs on a second one.
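(Which also means the fullness is only observable after the fact, by polling - e.g. with standard lvm2 reporting; the VG/pool names below are just examples, and the dm device name for a pool is typically vg-pool-tpool:)

  # Percentage of data and metadata space used in the thin pool
  lvs -o lv_name,data_percent,metadata_percent vgtest

  # Lower-level view straight from device-mapper: the thin-pool
  # status line reports used/total blocks for data and metadata
  dmsetup status vgtest-pool-tpool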

So it's been left to user-space to solve the case in the best way -
i.e. user-space can initiate 'fstrim' itself when the full-pool case happens, or reclaim the space in a number of other ways...
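A sketch of two such reactions (the mountpoint and thresholds are examples; the autoextend keys live in the activation section of lvm.conf):

  # 1) Hand blocks the filesystem no longer uses back to the pool
  fstrim -v /mnt

  # 2) Or let dmeventd grow the pool before it ever fills up -
  #    in /etc/lvm/lvm.conf (activation section), e.g.:
  #      thin_pool_autoextend_threshold = 70
  #      thin_pool_autoextend_percent = 20
  #    i.e. once the pool is 70% full, extend it by another 20%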

Regards

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


