Re: slave is rebalancing, master is not?

On 5 June 2015 at 20:46, Dr. Michael J. Chudobiak <mjc@xxxxxxxxxxxxxxx> wrote:
I seem to have an issue with my replicated setup.

The master says no rebalancing is happening, but the slave says there is (sort of). The master notes the issue:

[2015-06-05 15:11:26.735361] E [glusterd-utils.c:9993:glusterd_volume_status_aggregate_tasks_status] 0-management: Local tasks count (0) and remote tasks count (1) do not match. Not aggregating tasks status.

The slave shows some odd messages like this:
[2015-06-05 14:44:56.525402] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server

I want the supposed rebalancing to stop, so I can add bricks.

Any idea what is going on, and how to fix it?

Both servers were recently upgraded from Fedora 21 to 22.

Status output is below.

- Mike



Master:

[root@karsh ~]# /usr/sbin/gluster volume status
Status of volume: volume1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick karsh:/gluster/brick1/data                        49152   Y       4023
Brick xena:/gluster/brick2/data                         49152   Y       1719
Brick karsh:/gluster/brick3/data                        49153   Y       4015
Brick xena:/gluster/brick4/data                         49153   Y       1725
NFS Server on localhost                                 2049    Y       4022
Self-heal Daemon on localhost                           N/A     Y       4034
NFS Server on xena                                      2049    Y       24550
Self-heal Daemon on xena                                N/A     Y       24557

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks


Slave:

[root@xena glusterfs]# /usr/sbin/gluster volume status
Status of volume: volume1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick karsh:/gluster/brick1/data                        49152   Y       4023
Brick xena:/gluster/brick2/data                         49152   Y       1719
Brick karsh:/gluster/brick3/data                        49153   Y       4015
Brick xena:/gluster/brick4/data                         49153   Y       1725
NFS Server on localhost                                 2049    Y       24550
Self-heal Daemon on localhost                           N/A     Y       24557
NFS Server on 192.168.0.240                             2049    Y       4022
Self-heal Daemon on 192.168.0.240                       N/A     Y       4034

Task Status of Volume volume1
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : f550b485-26c4-49f8-b7dc-055c678afce8
Status               : in progress

[root@xena glusterfs]# gluster volume rebalance volume1 status
volume rebalance: volume1: success:

This is weird. Did you start the rebalance yourself? What does "gluster volume rebalance volume1 status" say? Also check whether both nodes are properly connected using "gluster peer status".
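For reference, running something like the following on each node (karsh and xena) should show whether the peers see each other and what each node thinks the rebalance state is; the volume name is taken from your status output:

    gluster peer status
    gluster volume rebalance volume1 status

Peer status should report the other node as connected on both sides; if it doesn't, that's worth sorting out before touching the rebalance task.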

If it says completed/stopped, you can go ahead and add the bricks. Also, can you check whether a rebalance process is running on your second server (xena)?
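A rough way to check, assuming the rebalance daemon shows up as a glusterfs process with "rebalance" in its command line (that pattern is an assumption, adjust as needed; the bracketed character keeps grep from matching itself):

    ps aux | grep -i '[r]ebalance'

If no such process is running and the status command reports completed or stopped, stopping the stale task and then adding bricks would look roughly like this. The brick paths below are only placeholders, and on a replica-2 volume like yours bricks have to be added in pairs:

    gluster volume rebalance volume1 stop
    gluster volume add-brick volume1 karsh:/gluster/brick5/data xena:/gluster/brick6/data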

BTW, there is *no* master or slave in a single gluster volume :)

Best Regards,
Vishwanath




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
