Dear list,

We are using (user-serviceable) snapshots and recently needed to restore a snapshot for a whole volume. The effect is that Gluster now uses a brick path pointing to the mount of the LVM snapshot:

Brick xyz:/run/gluster/snaps/d44afa00d24e4a249de440dc13bfe42c/brick1/gfs_home_brick1

The original brick mount point (/data/glusterfs_home/gfs_home_brick1) is still bound to the original logical volume, which is now completely out of date, because all further writes are going to the snapshot's logical volume (it seems). In the meantime, a third brick has been added to the (replicated) volume, and that one has the "correct" path.

Is there a preferred/easy/clean way to resolve this without breaking anything?

Best
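In case it helps to frame the question: the commands below are a rough sketch of how I imagine the situation could be inspected and the brick moved back to its intended path. The volume name gfs_home is my guess from the brick name, and the replace-brick step is untested on my side; I am explicitly asking whether this (or something else) is the clean way to do it.

```shell
# Inspect the current state: which brick paths the volume actually uses,
# and which snapshots glusterd still knows about.
gluster volume info gfs_home          # gfs_home is an assumed volume name
gluster volume status gfs_home
gluster snapshot list

# One possible (unverified) approach: migrate the brick that currently
# lives under /run/gluster/snaps/... back to a brick directory on the
# intended mount point, letting replication heal the data afterwards.
gluster volume replace-brick gfs_home \
  xyz:/run/gluster/snaps/d44afa00d24e4a249de440dc13bfe42c/brick1/gfs_home_brick1 \
  xyz:/data/glusterfs_home/gfs_home_brick1_new \
  commit force

# Then watch the self-heal catch the new brick up from the other replicas.
gluster volume heal gfs_home info
```

Note that replace-brick needs a target path that is not already a brick of the volume, hence the hypothetical gfs_home_brick1_new directory above; whether this interacts badly with the snapshot machinery is exactly what I am unsure about.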
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users