I think you might be mixing the two approaches.
Basically you have 2 options:
Create a brand new system with a different hostname and add it to the TSP (Trusted Storage Pool). Then remove the bricks (server + directory combinations) owned by the previous system and add the new bricks (see the first sketch after this list).
Use the same hostname as the old system and restore the gluster directories from backup (both the one in '/etc' and the one in '/var/lib'). If your gluster storage was also affected, you will need to recover the bricks from backup, or remove the old ones from the volume and recreate them (see the second sketch below).
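
For the first option, a minimal sketch of the CLI sequence could look like this. It assumes a replica 3 volume named 'myvol', the dead server 'node1', a replacement server 'node4' and a brick path '/bricks/brick1' - all of these names are placeholders, adjust them to your setup:

  # run on a healthy node (node2 or node3): add the new server to the pool
  gluster peer probe node4
  # drop the dead server's brick, reducing the replica count
  gluster volume remove-brick myvol replica 2 node1:/bricks/brick1 force
  # the old server no longer holds bricks, so it can leave the pool
  gluster peer detach node1
  # add the new, empty brick and restore replica 3
  gluster volume add-brick myvol replica 3 node4:/bricks/brick1
  # let self-heal copy the data onto the new brick
  gluster volume heal myvol full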
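For the second option, the rough idea (assuming you have a recent backup of '/etc/glusterfs' and '/var/lib/glusterd' from the old node1, and a replicated volume so the data can be healed - the archive names below are only examples) would be:

  # make sure glusterd is not running while you restore its state
  systemctl stop glusterd
  # restore /etc/glusterfs and /var/lib/glusterd (peer UUIDs, volume definitions)
  tar -C / -xzf /backup/node1-etc-glusterfs.tar.gz
  tar -C / -xzf /backup/node1-var-lib-glusterd.tar.gz
  systemctl start glusterd
  # node1 should show up again as part of the pool
  gluster peer status
  # if the bricks themselves were lost, let the other replicas heal them
  gluster volume heal myvol full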
Can you describe what you have done so far (logically)?
Best Regards,
Strahil Nikolov
On Mon, Jan 1, 2024 at 6:59, duluxoz <duluxoz@xxxxxxxxx> wrote:

Hi All (and Happy New Year),

We had to replace one of our Gluster Servers in our Trusted Pool this week (node1).

The new server is now built, with empty folders for the bricks, peered to the old Nodes (node2 & node3).

We basically followed this guide:

We are using the same/old IP address.

So when we try to do a `gluster volume sync node2 all` we get a `volume sync node2 all : FAILED : Staging failed on node2. Please check log file for details.`

The logs all *seem* to be complaining that there are no volumes on node1 - which makes sense (I think) because there *are* no volumes on node1.

If we try to create a volume on node1 the system complains that the volume already exists (on nodes 2 & 3) - again, yes, this is correct.

So, what are we doing wrong?

Thanks in advance

Dulux-Oz