Hi Jose,
By switching to a pure distribute volume you will lose availability if something goes bad, since there will be only one copy of each file.
I am guessing you have an nx2 (distributed-replicate) volume.
If you want to preserve one copy of the data on each of the distribute subvolumes, you can do that by decreasing the replica count as part of the remove-brick operation (a sketch is shown below).
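A minimal sketch, assuming a 2x2 volume named "testvol" with replica pairs (node1:/bricks/b1, node2:/bricks/b1) and (node1:/bricks/b2, node2:/bricks/b2); your volume and brick names will differ, so check the actual pairing with "gluster volume info" before running anything. Removing one brick from each replica pair drops the replica count from 2 to 1, leaving a plain distribute volume:

    # assumed names; remove exactly one brick from each replica pair
    gluster volume remove-brick testvol replica 1 \
        node2:/bricks/b1 node2:/bricks/b2 force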
If there are any inconsistencies, heal them first using the "gluster volume heal <volname>" command and wait until the
"gluster volume heal <volname> info" output shows zero pending entries before removing the bricks, so that the remaining bricks hold the correct data.
If you do not want to preserve the data, you can remove the bricks directly.
Even after removing the bricks, the data will still be present on the backend of the removed bricks. You have to erase it manually (both the data and the .glusterfs folder).
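A sketch of that cleanup, assuming the removed brick lived at /bricks/b1 (replace with your actual brick path, and run this only on the node that hosted the removed brick):

    # path is an assumption; do NOT run this on bricks still in the volume
    rm -rf /bricks/b1/.glusterfs   # internal gluster metadata
    rm -rf /bricks/b1/*            # leftover file data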
See [1] for more details on remove-brick.
HTH,
Karthik
On Thu, Apr 5, 2018 at 8:17 PM, Jose Sanchez <josesanc@xxxxxxxxxxxx> wrote:
We have a Gluster setup with 2 nodes (distributed replication) and we would like to switch it to distributed mode. I know the data is duplicated between those nodes. What is the proper way of switching it to distributed? We would like to double or gain the storage space on our gluster storage node. What happens with the data, do I need to erase one of the nodes?

Jose

---------------------------------
Jose Sanchez
Systems/Network Analyst
Center of Advanced Research Computing
1601 Central Ave. MSC 01 1190
Albuquerque, NM 87131-0001
575.636.4232
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users