Hi Joe,
Gluster volumes are made up of brick processes, and each brick process is associated with a particular brick directory. For the original volume, the brick processes run on the brick directories provided at volume-creation time. When a snapshot of that volume is taken, gluster creates a snapshot volume with its own bricks, which run on directories that look like the one you mentioned (/run/gluster/snaps/11efcc850133419991c4614b7cb7189c/brick3/brick). Each snapshot brick directory is where the LVM snapshot of the corresponding original brick's LV is mounted.

During a snapshot restore the volume stays offline, because we update the original volume's info files to point to the snapshot bricks instead of the original bricks, and we also remove the snapshot's info files. When the volume is started after the restore, it points to the snapshot bricks, and the user sees the data as it was when the snapshot was taken.

As a matter of principle, we do not touch user-created directories, since we don't claim "jurisdiction" over them. That is why you can still see the older data in those backend directories even after the restore; it is up to the user to decide what to do with the original directories and their data. This behavior is inherited from volume delete, where we take the same precautions to make sure we don't implicitly delete user-created directories and data.

However, once you have restored the volume to a snapshot, it is already pointing to snapshot bricks (created by gluster, not by the user), so any subsequent restore will remove the snapshot bricks that are currently part of the volume.

A rough sketch of the whole sequence is below.
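For illustration, assume a volume named myvol with a single original brick at server1:/data/brick1/brick and a snapshot named snap1 (all of these names are placeholders; the bricks have to sit on thinly provisioned LVs for gluster snapshots to work). The flow would look roughly like this:

    # gluster volume info myvol
      Brick1: server1:/data/brick1/brick          <-- user-created brick directory

    # gluster snapshot create snap1 myvol         <-- takes an LVM snapshot of each brick's LV
    # gluster volume stop myvol                   <-- restore requires the volume to be stopped
    # gluster snapshot restore snap1              <-- volume info now points to the snapshot bricks
    # gluster volume start myvol

    # gluster volume info myvol
      Brick1: server1:/run/gluster/snaps/<snap-volume-id>/brick1/brick

The old data under /data/brick1/brick is left untouched; cleaning it up or reusing it is up to you. A second restore from this state would remove the gluster-created snapshot bricks, since those were not created by the user.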
Thanks.

Regards,
Avra

On 03/04/2017 07:21 PM, Joseph Lorenzini wrote:
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users