Confirmed for Gluster 7.9, on both a distributed-replicate and a pure replicate volume. One of my 3 nodes died :(

I removed all bricks from the dead node and added them to the new node. I then started to add the arbiter bricks, as the distributed-replicate volume is configured with 2 replicas and 1 arbiter. I made sure to use the exact same mount point and path, and double/triple checked that each brick held exactly the same file content, in every directory, as the running brick it was about to be paired with again. Then I used the replace-brick command to replace dead-node:brick0 with new-node:brick0, and did this one by one for all bricks.

It took a while to get the replacement node up and running, so the cluster stayed operational and in use the whole time. When all bricks were finally moved, the self-heal daemon started healing several files. Everything worked out perfectly and with no downtime. Finally I detached the dead node. Done. (A rough sketch of the commands is at the bottom of this mail.)

A.

On Wednesday, 09.06.2021 at 15:17 +0200, Diego Zuccato wrote:
> On 05/06/2021 14:36, Zenon Panoussis wrote:
>
> > > What I'm really asking is: can I physically move a brick
> > > from one server to another such as
> >
> > I can now answer my own question: yes, replica bricks are
> > identical and can be physically moved or copied from one
> > server to another. I have now done it a few times without
> > any problems, though I made sure no healing was pending
> > before the moves.
>
> Well, if it's officially supported, that could be a really interesting
> option to quickly scale big storage systems.
> I'm thinking about our scenario: 3 servers, 36 12TB disks each. When
> adding a new server (or another pair of servers, to keep an odd number)
> it will require quite a lot of time to rebalance, with heavy
> implications both on the IB network and latency for the users. If we could
> simply swap around some disks it could be a lot faster.
> Have you documented the procedure you followed?
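
P.S. Roughly the command sequence described above -- not my literal shell
history, just a minimal sketch. The volume name "gv0", the brick path
/data/glusterfs/brick0 and the hostnames dead-node/new-node are placeholders:

  # 1. Bring the replacement node into the trusted pool.
  gluster peer probe new-node

  # 2. On new-node: attach the moved disk at the exact same mount point
  #    and brick path the dead node used (same directory contents).

  # 3. Point the volume at the new host, one brick at a time.
  gluster volume replace-brick gv0 \
      dead-node:/data/glusterfs/brick0 \
      new-node:/data/glusterfs/brick0 commit force

  # 4. Let the self-heal daemon catch up before doing the next brick;
  #    check that the list of entries needing heal drains to zero.
  gluster volume heal gv0 info

  # 5. Once every brick has been replaced and healed, drop the dead node.
  gluster peer detach dead-node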