Re: trouble with generic/081

On 9.1.2017 at 14:39, Christoph Hellwig wrote:
> On Fri, Jan 06, 2017 at 09:46:00AM +1100, Dave Chinner wrote:
>> And my 2c worth on the "lvm unmounting filesystems on error" - stop
>> it, now. It's the wrong thing to do, and it makes it impossible for
>> filesystems to handle the error and recover gracefully when
>> possible.
>
> It's causing way more harm than it helps due to the namespace
> implications.  And we'll have to fix it in the FS for real because
> other storage can run out of space as well.



Hi,

I may be blind, but I'm still missing some simple things here -


lvm2 will initiate a lazy umount of ALL thin devices from a thin-pool
when the pool reaches about 95% fullness (so a bit sooner than 100%,
with some 5% of 'free space' still left).

This should mostly trigger flushing of as much dirty data to disk as possible - which may even push the pool to 100% fullness (not wanted, but unavoidable with today's RAM sizes).
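As an aside, the pool fullness that drives this behaviour can be watched from a script. A minimal sketch, assuming a pool named vg/pool (`lvs -o data_percent` is a real lvm2 reporting field; the `check_pool` helper and the vg/pool name are mine, just for illustration):

```shell
# Decide whether a thin-pool is past the umount threshold, given its
# data_percent value as reported by lvm2.
check_pool() {
    local pct="$1"       # e.g. "95.12", as printed by 'lvs -o data_percent'
    local threshold=95   # the fullness level discussed above
    # compare only the integer part against the threshold
    if [ "${pct%.*}" -ge "$threshold" ]; then
        echo "pool above ${threshold}%"
    else
        echo "pool ok"
    fi
}

# In practice the value would come from lvm2 itself, e.g.:
#   pct=$(lvs --noheadings -o data_percent vg/pool | tr -d ' ')
#   check_pool "$pct"
```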

So now the filesystem gets into the position of seeing ENOSPC errors.

I'd expect XFS to shut itself down (xfs_shutdown) on this condition.

So it should get into the EXACT same state as is advocated here (the 'do nothing' case, without invoking the 'lazy umount') - just significantly later
(so IMHO possibly causing more damage to the user).

So is XFS incapable of handling a lazy umount at all in such conditions?

I really would like to first understand why there is such a big halo effect around this - since in my eyes, lvm2 was designed to operate within threshold bounds. Once the thin-pool volume is outside its configured bounds, lvm2 is not capable of delivering (resizing) more space, so it simply tries to stop further operations - the disruption of work was 'intended'.
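For reference, the threshold bounds I mean are the ones configurable in lvm.conf. A sketch of the relevant section (these activation settings do exist in lvm2; the values shown are only examples, not defaults I'm asserting):

```
activation {
    # When thin-pool usage crosses this percentage, dmeventd tries to
    # autoextend the pool...
    thin_pool_autoextend_threshold = 70
    # ...growing it by this percentage of its current size.
    thin_pool_autoextend_percent = 20
}
```

When autoextension cannot deliver more space and usage keeps climbing, the pool ends up outside these bounds - that is the situation where lvm2 steps in as described above.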


What I'm getting from this thread is: this is unwanted, users wish to continue using the thin-pool further with possible damage to their data set, and
lvm2 WILL provide configurable settings for this.

I'm not disputing this part AT ALL - just to make that really clear.


But could anyone from XFS specify why an umount causes 'more' damage than no umount at all?

With my naive thinking, I'd just assume I hit 'xfs_shutdown' somewhat earlier.
Does XFS refuse to umount an 'erroring' device?


Also please note - since namespaces were mentioned here - if this is something Docker related, be aware that for Docker thins, lvm2 has already been leaving such volumes intact for a while (-> no umount for Docker thin volumes).


Regards

Zdenek



--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel


