Re: add-brick: failed: Commit failed

As everything else seems OK, you can check whether your arbiter node is OK.
Run 'gluster peer status' on all nodes.

If all peers report 2 peers connected, you can run:
gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0
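
A quick way to check all three nodes at once (just a sketch, assuming the hostnames from this thread and that you can ssh between the nodes) is:

for h in gfs1 gfs2 gfs3; do echo "== $h =="; ssh "$h" gluster peer status; done

Each node should list the other two peers with 'State: Peer in Cluster (Connected)'.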

Best Regards,
Strahil Nikolov

On May 20, 2019 02:31, David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:
Hello,

It does show everything as Connected and 0 for the existing bricks, gfs1 and gfs2. The new brick gfs3 isn't listed, presumably because of the failure as per my original email. Would anyone have any further suggestions on how to prevent the "Transport endpoint is not connected" error when adding the new brick?

# gluster volume heal gvol0 info summary
Brick gfs1:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick gfs2:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0


# gluster volume status all
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7624
Self-heal Daemon on localhost               N/A       N/A        Y       47636
Self-heal Daemon on gfs3                    N/A       N/A        Y       18542
Self-heal Daemon on gfs2                    N/A       N/A        Y       37192
 
Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks


On Sat, 18 May 2019 at 22:34, Strahil <hunter86_bg@yahoo.com> wrote:

Just run 'gluster volume heal my_volume info summary'.

It will report any issues - everything should be 'Connected' and show '0'.

Best Regards,
Strahil Nikolov

On May 18, 2019 02:01, David Cunningham <dcunningham@voisonics.com> wrote:
Hi Ravi,

The existing two nodes aren't in split-brain, at least that I'm aware of. Running "gluster volume status all" doesn't show any problem.

I'm not sure what "in metadata" means. Can you please explain that one?


On Fri, 17 May 2019 at 22:43, Ravishankar N <ravishankar@redhat.com> wrote:


On 17/05/19 5:59 AM, David Cunningham wrote:
Hello,

We're adding an arbiter node to an existing volume and having an issue. Can anyone help? The root cause error appears to be "00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)", as below.

Was your root directory of the replica 2 volume in metadata or entry split-brain? If yes, you need to resolve it before proceeding with the add-brick.
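
To check (a rough sketch, assuming the volume name and brick paths from your mail), you can run:

gluster volume heal gvol0 info split-brain

and inspect the AFR xattrs on the brick root on both existing nodes:

getfattr -d -m . -e hex /nodirectwritedata/gluster/gvol0

Non-zero trusted.afr.gvol0-client-* values on the brick root would indicate pending or split-brain state on "/" that should be healed before adding the new brick.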

-Ravi


We are running glusterfs 5.6.1. Thanks in advance for any assistance!

On existing node gfs1, trying to add new arbiter node gfs3:

# gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0
volume add-brick: failed: Commit failed on gfs3. Please check log file for details.

On new node gfs3 in gvol0-add-brick-mount.log:

[2019-05-17 01:20:22.689721] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2019-05-17 01:20:22.689778] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse: switched to graph 0
[2019-05-17 01:20:22.694897] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2019-05-17 01:20:22.699770] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
[2019-05-17 01:20:22.699834] W [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 2: SETXATTR 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution failed
[2019-05-17 01:20:22.715656] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /tmp/mntQAtu3f
[2019-05-17 01:20:22.715865] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5)
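
One thing that may also be worth ruling out (an assumption on my part, not something confirmed by the logs above) is basic connectivity from gfs3 back to the existing nodes, since it is the temporary add-brick mount on gfs3 that reports "Transport endpoint is not connected". For example, from gfs3:

for h in gfs1 gfs2; do nc -zv "$h" 24007; nc -zv "$h" 49152; done

24007 is the glusterd management port and 49152 is the brick port reported by 'gluster volume status'.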


--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
