Yet, in your case you don't have enough space. I guess you can try it on 2 VMs to simulate the failure, rebuild, and then forcefully re-add the old brick. It might work, it might not... at least it's worth trying.
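Roughly something like this on the test VMs (an untested sketch, not a recipe - "vol1", "node1", "node2" and the brick path "/bricks/brick1" are only placeholders for your real names, so adjust them):

# on the surviving node1: note the volume id the volume expects
gluster volume info vol1
getfattr -n trusted.glusterfs.volume-id -e hex /bricks/brick1

# on the reinstalled node2: rejoin the trusted pool
# (you may also need to put the old UUID back into
# /var/lib/glusterd/glusterd.info so the brick is matched to this node)
gluster peer probe node1

# on node2: restore the volume-id xattr on the old brick root,
# using the hex value you got on node1, then restart glusterd
setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-getfattr> /bricks/brick1
systemctl restart glusterd

# bring the brick process back up and check that it is online
gluster volume start vol1 force
gluster volume status vol1

If that works on the VMs, repeat it on the real nodes; if it doesn't, you have lost nothing.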
Best Regards,
Strahil Nikolov
On Thu, Aug 26, 2021 at 15:27, Taste-Of-IT<kontakt@xxxxxxxxxxxxxx> wrote:
Hi,
what do you mean? Copy the data from the dead node to the running node and then add the newly installed node to the existing vol1, and after that run a rebalance? If so, this is not possible, because node1 does not have enough free space to take everything from node2.
thx
On 22.08.2021 18:35:33, Strahil Nikolov wrote:
> Hi,
>
> The best way is to copy the files over the FUSE mount and later add the bricks and rebalance.
> Best Regards,
> Strahil Nikolov
>
>
> On Thu, Aug 19, 2021 at 23:04, Taste-Of-IT<kontakt@xxxxxxxxxxxxxx> wrote:
> Hello,
>
> I have two nodes with a distributed volume. The OS is on a separate disk, which crashed on one node. However, I can reinstall the OS, and the RAID6 that holds the distributed volume's brick has been rebuilt. The question now is how to re-add that brick, with its data, back to the existing old volume.
>
> If this is not possible, what about this idea: I create a new distributed vol2 across both nodes and move the files directly from the brick directory to the new volume via an NFS-Ganesha share?
>
> thx
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users