On 17. 08. 22 at 15:41, Martin Wilck wrote:
> On Wed, 2022-08-17 at 14:54 +0200, Zdenek Kabelac wrote:
>> On 17. 08. 22 at 14:39, Martin Wilck wrote:
>> Let's make clear we are very well aware of all the constraints
>> associated with udev rule logic (and we tried quite hard to minimize
>> the impact - however, the udevd developers kind of 'misunderstood' how
>> badly they would be impacting the system's performance with the
>> existing watch rule logic - and the story kind of 'continues' with
>> systemd's D-Bus services, unfortunately...)
> I dimly remember you dislike udev ;-)
Well, it's not 'a dislike' from my side - it's rather that the
architecture itself is simply lacking in many areas...
Dave is a complete disliker of udev & systemd altogether :)...
> I like the general idea of the udev watch. It is the magic that causes
> newly created partitions to magically appear in the system, which is
The tragedy of the design comes from the plain fact that there are only
'very occasional' consumers of all this 'collected' data - yet gathering
all the info and keeping all of it 'up-to-date' gets very, very
expensive, and it can basically 'neutralize' a lot of your CPU once
there are too many resources to watch and keep updated...
> very convenient for users and wouldn't work otherwise. I can see that
> it might be inappropriate for LVM PVs. We can discuss changing the
> rules such that the watch is disabled for LVM devices (both PV and LV).
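For reference, the rule change suggested above might look roughly like
this - an untested sketch; the file name is made up, and the match keys
would need to be verified against the actual dm/udev rules shipped on
the system:

    # /etc/udev/rules.d/90-lvm-nowatch.rules (hypothetical file name)
    # Drop the inotify watch on LVM PVs (devices carrying an LVM2 signature).
    # Must run after the rules that import ID_FS_TYPE (60-persistent-storage.rules).
    ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", OPTIONS+="nowatch"
    # Drop the watch on LVs (device-mapper devices whose DM UUID marks them as LVM).
    ACTION!="remove", SUBSYSTEM=="block", KERNEL=="dm-*", ENV{DM_UUID}=="LVM-*", OPTIONS+="nowatch"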
It's really not fixable as is - because of the complete lack of 'error'
handling for devices in the udev DB (i.e. duplicate devices..., various
frozen devices...).
There is an ongoing 'SID' project that might push the logic somewhat
further, but the existing 'device' support logic as it stands today is
an unfortunate 'trace' of how the design should not have been made - and
since all the 'original' programmers left the project a long time ago,
it's non-trivial to push things forward.
> I can't claim to foresee all possible side effects, but it might be
> worth a try. It would mean that newly created LVs, LV size changes etc.
> would not be visible in the system immediately. I suppose you could
> work around that in the LVM tools by triggering change events after
> operations like lvcreate.
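Triggering the events by hand should be straightforward; a rough sketch
of what the tools could do after such an operation (the VG/LV names are
examples only):

    # Create the LV, then nudge udev manually, since with the watch
    # disabled no change event would fire on its own.
    lvcreate -L 10G -n data vg0
    udevadm trigger --action=change /dev/vg0/data
    udevadm settle    # wait until udev has processed the event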
We just hope that SID will make some progress (although probably a small
one at the beginning).
>> However, let's focus on 'pvmove', as it is a potentially very lengthy
>> operation - so it's not feasible to keep the VG locked/blocked across
>> an operation which might take even days with slower storage and large
>> moved sizes (a write access/lock disables all readers...)
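For context: such a move is typically started in the background and then
polled, so the operation itself can run for days - a sketch, with
example device and VG names:

    pvmove -b /dev/sdb1 /dev/sdc1     # start the move in the background
    lvs -a -o name,copy_percent vg0   # the temporary pvmove mirror LV reports progress
    pvmove --abort                    # abandon the move if needed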
> So these close-after-write operations are caused by locking/unlocking
> the PVs?
> Note: we were observing that watch events were triggered every 30s, for
> every PV, simultaneously. (@Heming, correct me if I'm wrong here.)
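To pin down what is arming and firing the watch, it might help to
capture the events directly - a sketch (the PV path is just an example,
and inotifywait comes from the inotify-tools package):

    # Show udev events with all properties for block devices.
    udevadm monitor --udev --property --subsystem-match=block
    # Watch for the close-after-write that makes the watch rule fire.
    inotifywait -m -e close_write /dev/sda2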
That's why we would like to see the 'metadata', and also to check
whether the issue still appears with the latest version of lvm2.
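Something along these lines would collect it - a sketch; the output path
is arbitrary:

    lvm version                           # exact lvm2 / device-mapper versions
    vgcfgbackup -f /tmp/%s_metadata.txt   # text backup of each VG's metadata (%s = VG name)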
Zdenek
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/