Hi,
I have setup a Debian 10 VM to understand the issue.
root@sympa:~# uname -a
Linux sympa 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
root@sympa:~# lvdisplay /dev/sympa-vg/swap_1
--- Logical volume ---
LV Path /dev/sympa-vg/swap_1
LV Name swap_1
VG Name sympa-vg
LV UUID 1b59OW-M2yW-PuI1-QN5t-pN0w-6Akl-AnJnNh
LV Write Access read/write
LV Creation host, time sympa, 2019-07-10 11:06:42 +0200
LV Status available
# open 2
LV Size <1,99 GiB
Current LE 509
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1
root@sympa:~# lsof | grep "254,1"
root@sympa:~# dmsetup info
Name: sympa--vg-swap_1
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 2
Event number: 0
Major, minor: 254, 1
Number of targets: 1
UUID: LVM-Imuz5YW2OXfsyO2ChOPPLI5flib9YlLi1b59OWM2yWPuI1QN5tpN0w6AklAnJnNh
Name: sympa--vg-root
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 1
Event number: 0
Major, minor: 254, 0
Number of targets: 1
UUID: LVM-Imuz5YW2OXfsyO2ChOPPLI5flib9YlLiJMF1vN1e6TKmfV1rT0fvWJnpViu7BMw5
root@sympa:~# lsof | grep swap_1
root@sympa:~#
Could someone please give me some hints on how to verify the "# open" value?
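For what it's worth, `lsof` showing nothing for a swap LV is expected: swap space is held open by the kernel, not by a process, so only `/proc/swaps` and the dmsetup "Open count" field reflect it. As a small sketch of how one might script a check of that field, here is the exact output captured above being parsed with awk (on a live system you would pipe `dmsetup info sympa--vg-swap_1` instead of the sample text):

```shell
# Sample is the dmsetup output captured in this mail; replace with
# a real `dmsetup info sympa--vg-swap_1` invocation on a live system.
sample='Name: sympa--vg-swap_1
State: ACTIVE
Open count: 2'

# Extract the numeric value of the "Open count" field.
open_count=$(printf '%s\n' "$sample" | awk -F': *' '/^Open count/ {print $2}')
echo "$open_count"
```

A swap LV that is in use will normally report a non-zero open count here until `swapoff` is run.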
Thanks
On 10/07/2019 15:37, Simon ELBAZ wrote:
Thank you very much Zdenek.
I will suggest a kernel update to the customer.
On 10/07/2019 15:32, Zdenek Kabelac wrote:
On 10/07/2019 13:54, Simon ELBAZ wrote:
Hi Zdenek,
Thanks for your feedback.
The kernel version is:
[root@panoramix ~]# uname -a
Linux panoramix.ch-perrens.fr 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue
Dec 15 21:19:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Hi
So this is really a very old kernel - released ~10 years back.
I'm afraid no one is going to look into the cause of any kernel
bug there...
It might be interesting to see if you can find a reproducer, to give
you hints on how to avoid this happening (at least not easily).
Jul 2 03:08:10 panoramix LVM(pri_ISCSIVG0_vg_obm)[18618]: INFO:
Retry deactivating volume group vg_obm
This is why I am trying to understand how the field is computed.
A simple rule applies here: DM devices that are in use (open_count > 0)
cannot be deactivated.
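That rule can be sketched as a small guard in shell. This is only an illustration; `can_deactivate` is a hypothetical helper, and on a live system the count would come from `dmsetup info -c --noheadings -o open <device>` rather than being passed in directly:

```shell
# Hypothetical guard: deactivation is only safe when the DM open
# count is zero. The count would normally be read with
#   dmsetup info -c --noheadings -o open "$device"
can_deactivate() {
  open_count="$1"
  [ "$open_count" -eq 0 ]    # succeed only when nothing holds the device
}

can_deactivate 2 && echo "safe to deactivate" || echo "device in use"
```

With the open count of 2 seen above, the guard reports "device in use", which is exactly why the deactivation keeps failing.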
Occasionally there were 'race events' with udev - where a device
which should otherwise have been unused was asynchronously
opened by udev scanning rules - but that is likely not your case.
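The "Retry deactivating volume group" log line quoted earlier is the usual mitigation for such transient udev opens. A minimal retry wrapper might look like the sketch below; `try_deactivate` is a hypothetical stand-in for the real command (e.g. `vgchange -an vg_obm`), here stubbed to fail twice and then succeed so the loop can be exercised:

```shell
# Sketch: retry a deactivation that can fail transiently while udev
# still holds the device open.
retry_deactivate() {
  attempts="$1"
  while [ "$attempts" -gt 0 ]; do
    if try_deactivate; then    # stand-in for e.g. `vgchange -an vg_obm`
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 0                    # a real script would back off here
  done
  return 1
}

# Stub simulating a udev race: fails on the first two calls.
calls=0
try_deactivate() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

retry_deactivate 5 && echo "deactivated after $calls tries"
```

A permanently elevated open count, as in your case, is the one situation this retry cannot help with: the loop would simply exhaust its attempts.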
Since you seem to have a permanently elevated device open count:
if you are sure there is no running application keeping the device
open after umount, it is likely some very old kernel bug - with a
very high probability (99.99999%) of having been fixed since ;)
Regards
Zdenek
--
Simon Elbaz
@Linagora
Mob: +33 (0) 6 38 99 18 34
Tour Franklin 31ème étage
100/101 Quartier Boieldieu
92042 La Défense
FRANCE
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/