Re: Reserve space for specific thin logical volumes


 



On 11.9.2017 at 15:46, Xen wrote:
> Zdenek Kabelac wrote on 11-09-2017 15:11:

>> Thin-provisioning is about 'postponing' - available space is to be
>> delivered in time.

> That is just one use case.

> Many more people probably use it for another use case:

> a fixed amount of storage, with thin provisioning of the available space.

>> You order some work which costs $100.
>> You have just $30, but you know you will have $90 next week -
>> so the work can start....

> I know the typical use case that you advocate, yes.

>> But it seems some users know it will cost $100, yet they still think
>> the work can be done with $10 and it will 'just' work the same....

> No, that's not what people want.

> People want efficient use of storage without BTRFS, that's all.


What's wrong with BTRFS....

Either you want the fs & block layers tied together - that's the
btrfs/zfs approach -

or you want

a layered approach with separate 'fs' and block layers (the dm approach).

If you are advocating here that we start mixing the 'dm' and 'fs' layers
just because you do not want to use 'btrfs', you will probably not gain
much traction...


> A filesystem-level failure also need not be critical when the volume
> itself is non-critical: LVM might fail even though the filesystem and
> applications do not.

>> My laptop has 32G RAM - so you can have 60% of it in dirty pages;
>> those may raise a pretty major 'provisioning' storm....

> Yes, but the system still does not need to crash, right?

We need to see EXACTLY which kind of crash you mean.

If you are using some older kernel - then please upgrade first and
provide a proper BZ case with a reproducer.

BTW you can think of an out-of-space thin-pool with a thin volume and a
filesystem as a FS where some writes end with a 'write error'.


If you think there is an OS which keeps running uninterrupted while a
number of writes end with 'error' - show it to us :) - maybe we should
stop working on Linux and switch to that (supposedly much better) OS....


But we are talking about the generic case here, not about individual
sub-cases where some limitation might give you a better chance of rescue...

> But no one in his right mind currently runs the root volume out of a
> thin pool; in pretty much all cases it is probably only used for data,
> or for example for hosting virtual hosts/containers/virtualized
> environments/guests.

You can have different pools, and you can use a rootfs on thins to
easily test e.g. system upgrades....
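
For illustration only - a minimal sketch with made-up VG/LV names,
assuming the rootfs lives on a thin LV vg/root in thin pool vg/pool:

  lvcreate -s vg/root -n root_preupgrade  # thin snapshot, no extra space used up front
  # ... run the system upgrade and test it ...
  # roll back if it went wrong (merge applies on next activation/reboot):
  lvconvert --merge vg/root_preupgrade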

> So data use is pretty much the intended/common/standard use case for
> thin volumes.

> Now, maybe the number of people who still have a running system after
> their data volumes overprovision/fill up/crash is limited.

Most thin-pool users are AWARE of how to use it properly ;) lvm2 tries
to minimize the (data-loss) impact of misused thin-pools - but we can't
spend too much effort there....
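
('Proper use' mostly means watching pool fullness and extending in time -
roughly like this, with an illustrative VG name:)

  # check how full the pool's data and metadata are
  lvs -o lv_name,data_percent,metadata_percent vg

  # or let dmeventd auto-extend the pool before it fills (lvm.conf):
  #   activation/thin_pool_autoextend_threshold = 70
  #   activation/thin_pool_autoextend_percent = 20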

So what is important:
- 'committed' data (i.e. a transactional database) are never lost
- fsck after reboot should work

If either of these 2 conditions does not hold - that's a serious bug.
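
(The usual recovery path after a pool fills up - a sketch, names made
up again:)

  lvextend -L+2G vg/pool      # or grow the VG first: vgextend vg /dev/sdX
  fsck -f /dev/vg/thindata    # the filesystem should be repairable
  mount /dev/vg/thindata /mnt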

But if you advocate for continued system use of an out-of-space
thin-pool - then I'd probably recommend you start sending patches... as
an lvm2 developer I don't see this as the best time investment, but
anyway...


> However, from both a theoretical and practical standpoint, being able
> to just shut down whatever services use those data volumes -- which is
> only possible
Are you aware there is just one single page cache shared by all devices
in your system?
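
(You can see that shared state directly, e.g.:)

  grep -E 'Dirty|Writeback' /proc/meminfo   # dirty pages are global, not per device
  sysctl vm.dirty_ratio vm.dirty_background_ratio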


> if the base system is still running -- makes for far easier recovery
> than anything else, because how are you going to boot the system
> reliably without using any of those data volumes? You need rescue
> mode etc.

Again: do you have a use-case where you see a crash from a mounted data
volume on an overfilled thin-pool?

On my system I can easily umount such a volume after all 'write'
requests have timed out (or use a thin-pool with '--errorwhenfull y'
for an instant error reaction).
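
(Setting that on an existing pool - illustrative names again:)

  lvchange --errorwhenfull y vg/pool   # error out writes immediately instead of queueing
  lvs -o lv_name,lv_when_full vg       # verify: should report 'error'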

So please can you stop repeating that an overfilled thin-pool with a
thin LV data volume kills/crashes the machine - unless you open a BZ
and prove otherwise. You will surely get 'fs' corruption, but nothing
like a crashing OS can be observed on my boxes....

Here we are really interested in upstream issues - not in missing
bug-fix backports into every distribution and every one of its released
versions....


> He might be able to recover his system if he is still able to log
> into it.

There is no problem with that as long as the rootfs has a consistently
working fs!

Regards

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



