Re: Conflicting info on whether replicated bricks both online

On 11/18/2016 08:23 PM, Whit Blauvelt wrote:
> On the one hand:
>
>    # gluster volume heal foretee info healed
>    Gathering list of healed entries on volume foretee has been unsuccessful on bricks that are down. Please check if all brick processes are running.
>
'info healed' and 'info heal-failed' are deprecated sub-commands. That message is a bug; there is a patch (http://review.gluster.org/#/c/15724/) in progress to remove them from the CLI.
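For reference, the sub-commands that remain supported cover the same ground; using the volume name from this thread:

   # gluster volume heal foretee info
   # gluster volume heal foretee info split-brain

The first lists entries still pending heal; the second lists any entries in split-brain.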
> On the other:
>
>    root@bu-4t-a:/mnt/gluster# gluster volume status foretee
>    Status of volume: foretee
>    Gluster process                             TCP Port  RDMA Port  Online  Pid
>    ------------------------------------------------------------------------------
>    Brick bu-4t-a:/mnt/gluster                  49153     0          Y       9807
>    Brick bu-4t-b:/mnt/gluster                  49152     0          Y       24638
>    Self-heal Daemon on localhost               N/A       N/A        Y       2743
>    Self-heal Daemon on bu-4t-b                 N/A       N/A        Y       12819
>
>    Task Status of Volume foretee
>    ------------------------------------------------------------------------------
>    There are no active volume tasks
>

> And:
>
>    # gluster volume heal foretee info
This is the only command you need to run to monitor pending entries. As to why they are not getting healed, you would have to look at the glustershd.log on both nodes. Manually launch a heal with `gluster volume heal <volname>` and see what the shd log spews out.
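A minimal version of that workflow, assuming glustershd logs to the default location of a stock install:

   # gluster volume heal foretee
   # tail -f /var/log/glusterfs/glustershd.log

Re-running `gluster volume heal foretee info` afterwards will show whether the entry count is dropping.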

HTH,
Ravi
>    ...
>    <gfid:016ddfdf-f84d-4b94-8cb2-4aeced14f5dd>
>    <gfid:00ec8c94-85ed-43bc-8484-4aba84470392>
>    Status: Connected
>    Number of entries: 3141

> Both systems have their bricks in /mnt/gluster, and the volume is then
> mounted at /backups. I can write or delete a file in /backups on either
> system, and the change appears in /backups on the other, and in
> /mnt/gluster on both.
>
> So Gluster is working. There have only ever been the two bricks. But there
> are 3141 entries that won't heal, and a suggestion that one of the bricks is
> offline -- when they're both plainly there.
>
> This is with glusterfs 3.8.5 on Ubuntu 16.04.1.
>
> What's my next move?
>
> Thanks,
> Whit
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users