Re: Reserve space for specific thin logical volumes

Zdenek Kabelac wrote on 13-09-2017 21:17:

> Please if you can show the case where the current upstream thinLV
> fails and you lose your data - we can finally start to fix something.

Hmm, I can only say "I owe you one" on this.

I mean that it will have to wait, but I hope to get to it at some point.

> I'm still unsure what problem you want to get resolved from the pretty
> small group of people around dm/lvm2 - do you want us to rework the
> kernel page-cache?
>
> I'm simply still confused what kind of action you expect...
>
> Be specific with a real world example.

I think Jonathan Brassow's idea is a very good place to begin (thank you, sir ;-)).

I get that you say a kernel-space solution is impossible to implement (apart from not crashing the system, which you say is no longer an issue) because checking several things would prolong execution paths considerably.

And I realize that any such thing would need asynchronous checking, updating of some values, and then execution paths that check those values, which I guess could indeed be rather expensive to actually execute.

The only real kernel experience I have was dabbling with filename_lookup and path_lookupat, or whatever it was called: inode path lookups, which is a bit of the same thing. And indeed, even a single extra check would have incurred a performance overhead.

That code already differentiated between a fast lookup path and a slow one, and the fast path in particular was not something you'd want to mess with.

But I want to say that I have absolutely no issue with asynchronous 'intervention', even if it is not byte-accurate, as you say in the other email.

And I get that you prefer user-space tools to do this...

And you say there that this information is hard to mine.

And that the "thin_ls" tool does that.
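
To make that concrete, here is a minimal sketch of how a script might pull per-thin-volume usage out of thin_ls (from thin-provisioning-tools). The pool dm name vg-pool-tpool and the metadata path /dev/mapper/vg-pool_tmeta are placeholders for my setup, and the exact thin_ls field names should be checked against its man page:

#!/usr/bin/env python3
"""Sketch: per-thin-LV block usage via thin_ls on a live pool."""
import subprocess

POOL = "vg-pool-tpool"               # placeholder: dm name of the pool device
TMETA = "/dev/mapper/vg-pool_tmeta"  # placeholder: pool metadata sub-LV

def run(*cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Reserving a metadata snapshot lets thin_ls read consistent information
# while the pool stays online (see the kernel thin-provisioning docs).
run("dmsetup", "message", POOL, "0", "reserve_metadata_snap")
try:
    # EXCLUSIVE_BLOCKS vs SHARED_BLOCKS is the part that is genuinely hard
    # to mine any other way, because of blocks shared between snapshots.
    print(run("thin_ls", "--metadata-snap",
              "--format", "DEV,MAPPED_BLOCKS,EXCLUSIVE_BLOCKS,SHARED_BLOCKS",
              TMETA))
finally:
    run("dmsetup", "message", POOL, "0", "release_metadata_snap")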

It's just that I don't want it to be 'random', depending on each particular sysadmin doing the right thing in isolation, with all those sysadmins independently writing the same code.

At the very least, if you recognise your responsibility, which you are doing now, we can have a bit of a framework delivered by upstream LVM, so the thing comes out more fully fledged and sysadmins have less work to do, even if they still have to customize the scripts.

The ideal would definitely be something you "set up" once, after which it takes care of itself, i.e. you only have to input some values and constraints.
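
Something in that direction seems to exist as a hook already: as I understand it, newer lvm2 lets dmeventd hand pool threshold events to an external command via the dmeventd/thin_command setting in lvm.conf. A minimal sketch of what such a script could look like, assuming that setting and assuming dmeventd exports data usage in the DMEVENTD_THIN_POOL_DATA environment variable (a percentage); both should be verified against your lvm2 version:

#!/usr/bin/env python3
"""Sketch of a script dmeventd could call on thin pool threshold events.

Assumption: lvm.conf points at this script via the dmeventd/thin_command
setting, and dmeventd exports usage percentages in DMEVENTD_THIN_POOL_DATA
(and DMEVENTD_THIN_POOL_METADATA). Verify against your lvm2 version.
"""
import os
import subprocess
import sys

DATA_CRITICAL = 95.0  # the "constraint" you set up once: act at 95% data usage

data_pct = float(os.environ.get("DMEVENTD_THIN_POOL_DATA", "0"))

# First line of defence: let lvm apply its own configured autoextend policy.
# "vg/pool" is a placeholder for the actual pool LV.
subprocess.run(["lvextend", "--use-policies", "vg/pool"], check=False)

if data_pct >= DATA_CRITICAL:
    # Autoextend was not enough; this is where the site-specific
    # intervention goes (see the fsfreeze sketch below).
    sys.exit(1)

The point being: upstream ships the skeleton and the constraint checking, and the sysadmin only fills in the intervention part.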

But intervention in the form of "fsfreeze" or the like is very site-specific, I get that.
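
The mechanics of such an intervention are small; it is the policy of which volume to freeze, and when, that is personal. A sketch, with /mnt/thin_data as a placeholder for the mount point of the thin LV you want to stop allocating:

import subprocess

MOUNT = "/mnt/thin_data"  # placeholder: mount point of the thin LV to protect

# fsfreeze(8) from util-linux suspends new writes at the filesystem level,
# which stops the volume from allocating further blocks from the pool.
subprocess.run(["fsfreeze", "--freeze", MOUNT], check=True)
try:
    pass  # extend the pool, delete old snapshots, page an operator, ...
finally:
    subprocess.run(["fsfreeze", "--unfreeze", MOUNT], check=True)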

And I get that previously auto-unmounting also did not really solve issues for everyone.

So a general interventionist policy that works for everyone is hard to come by.

So the only thing that could work for everyone is if there is actually a block on new allocations. If that is not possible, then indeed I agree that a "one size fits all" approach is hardly possible.

Intervention is system-specific.

Regardless, it should at least be easy to ensure that some constraints are enforced; that's all I'm asking.

Regards, (I'll respond further in the other email).

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


