Thanks for your quick reply. This is the output of my remaining healthy peer:

getfattr -d -m. -e hex /brick/raidvolb/data/
getfattr: Removing leading '/' from absolute path names
# file: brick/raidvolb/data/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0x8786357b9d114c01a34baee949c116e9

On Mon, Mar 30, 2015 at 12:38 PM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
>
> On 03/30/2015 03:59 PM, Ml Ml wrote:
>>
>> Anyone?
>>
>> Is this a dumb question or just a hard one?
>> I already tried:
>>
>> http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
>>
>> but I got stuck at the setfattr command.
>>
>> So I was wondering if this is the way to go?
>
> Could you paste the output of getfattr -d -m. -e hex
> <any-of-the-other-bricks-in-replication>?
>
> Pranith
>>
>> On Thu, Mar 26, 2015 at 10:31 PM, Ml Ml <mliebherr99@xxxxxxxxxxxxxx>
>> wrote:
>>>
>>> Hello List,
>>>
>>> I have a 3-peer replica Gluster. On one of my peers the hard drive of
>>> a brick failed.
>>> I replaced it and formatted the brick device with ext4.
>>>
>>> How do I get it back into my Gluster? Is there an official way to
>>> re-integrate it?
>>>
>>> Thanks,
>>> Mario
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://www.gluster.org/mailman/listinfo/gluster-users
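For what it's worth, the setfattr step from the Brick Restoration page can be sketched roughly as below, using the trusted.glusterfs.volume-id value from the healthy peer's getfattr output above. This is only a hedged outline, not a verified procedure: the commands are printed rather than executed, the brick path is taken from this thread, and <volname> is a placeholder for the actual volume name (which was not given here). Run the printed commands as root on the repaired peer only after double-checking them.

```shell
#!/bin/sh
# Hypothetical sketch: re-stamp a replaced brick with the volume-id
# recorded on a healthy peer, so glusterd will accept it again.
# Values below come from the getfattr output quoted in this thread.

BRICK=/brick/raidvolb/data                 # mount point of the new ext4 brick
VOLID=0x8786357b9d114c01a34baee949c116e9   # trusted.glusterfs.volume-id from healthy peer

# Printed as a dry run; execute them manually (as root) on the affected peer.
echo "mkdir -p $BRICK"
echo "setfattr -n trusted.glusterfs.volume-id -v $VOLID $BRICK"
echo "service glusterd restart"
echo "gluster volume heal <volname> full"  # ask self-heal to repopulate the brick
```

After the restart, self-heal should copy the data from the remaining replicas back onto the new brick; progress can be watched with `gluster volume heal <volname> info`.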