lvs activation columns are confusing for shared volume groups

Hi,

In 2018 a change [0] removed clvmd support from the LV activation
columns of the `lvs` tool. Since then, the column `lv_active_remotely`
is always unset/empty, and the columns `lv_active_locally` and
`lv_active_exclusively` are both simply coupled to the activation state
of the LV. For a local VG this is all fine and dandy, but it gets
confusing for shared VGs. Take this example of a 2-node shared-VG setup
using lvmlockd+sanlock:


```
root@node1:~# lvm version
  LVM version:     2.03.17(2)-git (2022-05-18)
  Library version: 1.02.187-git (2022-05-18)
  Driver version:  4.43.0
  Configuration:   ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-option-checking --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --with-udev-prefix=/ --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-dmeventd --enable-editline --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-udev_rules --enable-udev_sync --disable-readline

root@node1:~# vgchange --lock-start
root@node1:~# lvchange -aey 'my-shared-vg/node1-exclusive-lv'
root@node1:~# lvchange -asy 'my-shared-vg/shared-but-only-node1-lv'
root@node1:~# lvchange -asy 'my-shared-vg/shared-on-both-lv'

root@node2:~# vgchange --lock-start
root@node2:~# lvchange -aey 'my-shared-vg/node2-exclusive-lv'
root@node2:~# lvchange -asy 'my-shared-vg/shared-but-only-node2-lv'
root@node2:~# lvchange -asy 'my-shared-vg/shared-on-both-lv'

root@node1:~# lvs -a -o 'vg_name,vg_shared,lv_name,lv_uuid,lv_active,lv_active_locally,lv_active_remotely,lv_active_exclusively,lv_lockargs'
  VG           Shared  LV                       LV UUID                                Active ActLocal       ActRemote  ActExcl            LLockArgs
  my-shared-vg  shared [lvmlock]                2Iuwpm-TA9H-0JSA-EmeS-PaZX-oCzU-jo3X7h active active locally            active exclusively
  my-shared-vg  shared node1-exclusive-lv       TDFv2h-gYnZ-dkj4-kAVx-tkCR-gCvf-fXgDK7 active active locally            active exclusively 1.0.0:70254592
  my-shared-vg  shared node2-exclusive-lv       Pjhy72-Ah71-lPac-mC1f-FH4h-yXYx-JoplUH                                                     1.0.0:71303168
  my-shared-vg  shared shared-but-only-node1-lv qPVTYt-YGP0-X7kr-afGJ-8PyG-vS1p-FGWQLf active active locally            active exclusively 1.0.0:72351744
  my-shared-vg  shared shared-but-only-node2-lv 0mfufn-JJDI-IoVm-dyUf-soWa-Wy4G-TcGnbe                                                     1.0.0:73400320
  my-shared-vg  shared shared-on-both-lv        DPsQl2-6pqd-ZUTM-plv6-qR4w-VSZv-6mRwBr active active locally            active exclusively 1.0.0:74448896
  my-shared-vg  shared unused-lv                96X21r-Zm2Z-O2gp-hWVw-ryCb-y6JV-QYQOBk                                                     1.0.0:75497472
root@node1:~# lvmlockctl --info --dump | grep 'type=lv'
  info=r name=TDFv2h-gYnZ-dkj4-kAVx-tkCR-gCvf-fXgDK7 type=lv mode=ex sh_count=0 version=0
  info=r name=Pjhy72-Ah71-lPac-mC1f-FH4h-yXYx-JoplUH type=lv mode=un sh_count=0 version=0
  info=r name=qPVTYt-YGP0-X7kr-afGJ-8PyG-vS1p-FGWQLf type=lv mode=sh sh_count=1 version=0
  info=r name=0mfufn-JJDI-IoVm-dyUf-soWa-Wy4G-TcGnbe type=lv mode=un sh_count=0 version=0
  info=r name=DPsQl2-6pqd-ZUTM-plv6-qR4w-VSZv-6mRwBr type=lv mode=sh sh_count=1 version=0
  info=r name=96X21r-Zm2Z-O2gp-hWVw-ryCb-y6JV-QYQOBk type=lv mode=un sh_count=0 version=0

root@node2:~# lvs -a -o 'vg_name,vg_shared,lv_name,lv_uuid,lv_active,lv_active_locally,lv_active_remotely,lv_active_exclusively,lv_lockargs'
  VG           Shared  LV                       LV UUID                                Active ActLocal       ActRemote  ActExcl            LLockArgs
  my-shared-vg  shared [lvmlock]                2Iuwpm-TA9H-0JSA-EmeS-PaZX-oCzU-jo3X7h active active locally            active exclusively
  my-shared-vg  shared node1-exclusive-lv       TDFv2h-gYnZ-dkj4-kAVx-tkCR-gCvf-fXgDK7                                                     1.0.0:70254592
  my-shared-vg  shared node2-exclusive-lv       Pjhy72-Ah71-lPac-mC1f-FH4h-yXYx-JoplUH active active locally            active exclusively 1.0.0:71303168
  my-shared-vg  shared shared-but-only-node1-lv qPVTYt-YGP0-X7kr-afGJ-8PyG-vS1p-FGWQLf                                                     1.0.0:72351744
  my-shared-vg  shared shared-but-only-node2-lv 0mfufn-JJDI-IoVm-dyUf-soWa-Wy4G-TcGnbe active active locally            active exclusively 1.0.0:73400320
  my-shared-vg  shared shared-on-both-lv        DPsQl2-6pqd-ZUTM-plv6-qR4w-VSZv-6mRwBr active active locally            active exclusively 1.0.0:74448896
  my-shared-vg  shared unused-lv                96X21r-Zm2Z-O2gp-hWVw-ryCb-y6JV-QYQOBk                                                     1.0.0:75497472
root@node2:~# lvmlockctl --info --dump | grep 'type=lv'
  info=r name=TDFv2h-gYnZ-dkj4-kAVx-tkCR-gCvf-fXgDK7 type=lv mode=un sh_count=0 version=0
  info=r name=Pjhy72-Ah71-lPac-mC1f-FH4h-yXYx-JoplUH type=lv mode=ex sh_count=0 version=0
  info=r name=qPVTYt-YGP0-X7kr-afGJ-8PyG-vS1p-FGWQLf type=lv mode=un sh_count=0 version=0
  info=r name=0mfufn-JJDI-IoVm-dyUf-soWa-Wy4G-TcGnbe type=lv mode=sh sh_count=1 version=0
  info=r name=DPsQl2-6pqd-ZUTM-plv6-qR4w-VSZv-6mRwBr type=lv mode=sh sh_count=1 version=0
  info=r name=96X21r-Zm2Z-O2gp-hWVw-ryCb-y6JV-QYQOBk type=lv mode=un sh_count=0 version=0
```

As soon as an LV is active, `lvs` reports it as "active exclusively",
regardless of whether it was activated in exclusive or shared mode. To
get accurate information, you have to check the lock mode on that LV
using `lvmlockctl --info`. And as noted above, an LV is never reported
as "active remotely" even when it is. It took me a while to realise
this.

I am willing to spend some time developing a fix, but I am unsure which
direction to take.

If `lvmlockctl` is supposed to be the source of truth here, a
deprecation note in the docs for these columns would be appropriate; or,
more drastically, the columns could be removed altogether. The downside
is that the output of `lvmlockctl` in its current state isn't very
human-readable, especially compared to `lvs`.

Another way would be to integrate `lvs` with lvmlockd so that
exclusivity is properly reported for shared VGs: an LV is active
exclusively only if it is in a local VG, or if it is in a shared VG and
holds an exclusive lock; see the sketch below. As the example above
shows, it is impossible to ascertain whether an LV is remotely active,
because the "shared count" reported by `lvmlockctl` never exceeds 1 even
when the LV is active on two nodes (at least with sanlock). Depending on
how that behaves with dlm, the `lv_active_remotely` column could be
deprecated/removed or show something akin to "unknown".
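To make the intended rule concrete, here it is as a hypothetical shell
helper. The function name and interface are made up for illustration; a
real fix would of course implement this inside the `lvs` reporting code
rather than by shelling out, and it again relies on the dump format
shown above:

```
# Hypothetical: what lv_active_exclusively arguably *should* report.
lv_exclusive() {
    vg=$1 lv=$2
    shared=$(vgs --noheadings -o vg_shared "$vg" | tr -d ' ')
    active=$(lvs --noheadings -o lv_active "$vg/$lv" | tr -d ' ')
    if [ "$shared" != "shared" ]; then
        # local VG: exclusivity follows the plain activation state
        [ "$active" = "active" ] && echo "active exclusively"
    else
        # shared VG: only an "ex" lock held by lvmlockd counts
        uuid=$(lvs --noheadings -o lv_uuid "$vg/$lv" | tr -d ' ')
        mode=$(lvmlockctl --info --dump \
               | sed -n "s/.*name=$uuid type=lv mode=\([a-z]*\).*/\1/p")
        [ "$mode" = "ex" ] && echo "active exclusively"
    fi
}
```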

I am very much interested in the opinions of the developers and of more
knowledgeable, experienced users.

Full disclaimer: I have only recently started using non-local LVM
setups, and have so far only used sanlock; I have no experience
whatsoever with dlm or clvmd.


[0] https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=18259d5559307f2708e12b9923988319e46572df

----
Greetings

Corubba