Strahil Nikolov:
I’ve never had such a situation and I don’t recall anyone sharing something similar.
That's strange; it is really easy to reproduce. This is from a fresh test environment:
summary:
- There is one snapshot present.
- On one node, glusterd is stopped.
- While it is stopped, the snapshot is deleted.
- The node is brought up again.
- On that node there is now an orphaned snapshot.
detailed version:
# on node 1:
root@gl1:~# cat /etc/debian_version
11.7
root@gl1:~# gluster --version
glusterfs 10.4
root@gl1:~# gluster volume info
Volume Name: glvol_samba
Type: Replicate
Volume ID: 91cb059e-10e4-4439-92ea-001065652749
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gl1:/data/glusterfs/glvol_samba/brick0/brick
Brick2: gl2:/data/glusterfs/glvol_samba/brick0/brick
Brick3: gl3:/data/glusterfs/glvol_samba/brick0/brick
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
features.barrier: disable
root@gl1:~# gluster snapshot list
snaps_GMT-2023.08.15-13.05.28
# on node 3:
root@gl3:~# systemctl stop glusterd.service
# on node 1:
root@gl1:~# gluster snapshot deactivate snaps_GMT-2023.08.15-13.05.28
Deactivating snap will make its data inaccessible. Do you want to continue? (y/n) y
Snapshot deactivate: snaps_GMT-2023.08.15-13.05.28: Snap deactivated successfully
root@gl1:~# gluster snapshot delete snaps_GMT-2023.08.15-13.05.28
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snaps_GMT-2023.08.15-13.05.28: snap removed successfully
root@gl1:~# gluster snapshot list
No snapshots present
# on node 3:
root@gl3:~# systemctl start glusterd.service
root@gl3:~# gluster snapshot list
snaps_GMT-2023.08.15-13.05.28
root@gl3:~# gluster snapshot deactivate snaps_GMT-2023.08.15-13.05.28
Deactivating snap will make its data inaccessible. Do you want to continue? (y/n) y
snapshot deactivate: failed: Pre Validation failed on gl1.ad.arc.de. Snapshot (snaps_GMT-2023.08.15-13.05.28) does not exist.
Pre Validation failed on gl2. Snapshot (snaps_GMT-2023.08.15-13.05.28) does not exist.
Snapshot command failed
root@gl3:~# lvs -a
  LV                                  VG        Attr       LSize  Pool      Origin    Data%  Meta%  Move Log Cpy%Sync Convert
  669cbc14fa7542acafb2995666284583_0  vg_brick0 Vwi-aotz-- 15,00g tp_brick0 lv_brick0 0,08
  lv_brick0                           vg_brick0 Vwi-aotz-- 15,00g tp_brick0           0,08
  [lvol0_pmspare]                     vg_brick0 ewi------- 20,00m
  tp_brick0                           vg_brick0 twi-aotz-- 18,00g                     0,12   10,57
  [tp_brick0_tdata]                   vg_brick0 Twi-ao---- 18,00g
  [tp_brick0_tmeta]                   vg_brick0 ewi-ao---- 20,00m
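The leftover snapshot metadata should also still be visible under /var/lib/glusterd/snaps/ on node 3; a quick check would be (the expected listing is my assumption, not a verbatim capture):

ls /var/lib/glusterd/snaps/
# expected: snaps_GMT-2023.08.15-13.05.28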
Would it be dangerous to simply delete the following items on node 3 while glusterd is down:
- the orphaned directories in /var/lib/glusterd/snaps/
- the orphaned LV, here 669cbc14fa7542acafb2995666284583_0
Or is there a self-heal command?
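To be concrete, the manual cleanup I have in mind would look roughly like this (untested sketch; LV, VG and snapshot names taken from the output above, the order of steps and the mount check are my assumptions):

# on node 3, proposed cleanup (not executed yet):
systemctl stop glusterd.service
# remove the leftover snapshot metadata (directory name assumed to match the snapshot name)
rm -rf /var/lib/glusterd/snaps/snaps_GMT-2023.08.15-13.05.28
# if the snapshot brick is still mounted somewhere, unmount it before removing the LV
grep 669cbc14fa7542acafb2995666284583 /proc/mounts
# remove the orphaned thin LV
lvremove vg_brick0/669cbc14fa7542acafb2995666284583_0
systemctl start glusterd.service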
Regards
Sebastian
On 10.08.2023 at 20:33, Strahil Nikolov wrote:
I’ve never had such a situation and I don’t recall anyone sharing something similar.
Most probably it’s easier to remove the node from the TSP and re-add it. Of course, test the case in VMs first, just to validate that it’s possible to add a node to a cluster with snapshots.
I have a vague feeling that you will need to delete all snapshots.
Best Regards,
Strahil Nikolov