Re: lingering <gfid:*> entries in volume heal, gluster 3.6.3

On 07/15/2016 09:32 PM, Kingsley wrote:
> On Fri, 2016-07-15 at 21:06 +0530, Ravishankar N wrote:
>> On 07/15/2016 08:48 PM, Kingsley wrote:
>>> I don't have star installed so I used ls,
>> Oops typo. I meant `stat`.
>>> but yes they all have 2 links
>>> to them (see below).
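For reference, checking the link count on a brick looks something like
this (the brick path and gfid below are placeholders, not the real ones):

    stat -c '%h  %n' /path/to/brick/.glusterfs/ab/cd/<gfid>

For a regular file, a link count of 2 means the gfid entry under
.glusterfs still has its hard link at the file's normal path on the brick.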

>> Everything seems to be in place for the heal to happen. Can you tailf
>> the output of shd logs on all nodes and manually launch gluster vol heal
>> volname?
>> Use DEBUG log level if you have to and examine the output for clues.
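Something along these lines should do it (glustershd.log is normally
under /var/log/glusterfs/; adjust the path if your logs live elsewhere):

    # on each node
    tail -f /var/log/glusterfs/glustershd.log

    # from any one node
    gluster volume heal callrec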
> I presume I can do that with this command:
>
> gluster volume set callrec diagnostics.brick-log-level DEBUG
shd is a client process, so it is diagnostics.client-log-level. This would affect your mounts too.
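i.e. something like:

    gluster volume set callrec diagnostics.client-log-level DEBUG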

> How can I find out what the log level is at the moment, so that I can
> put it back afterwards?
INFO. You can also use `gluster volume reset`.
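For example, to put it back to the default once you're done:

    gluster volume reset callrec diagnostics.client-log-level

`gluster volume info callrec` lists any options that have been changed
from their defaults under "Options Reconfigured".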


>> Also, some dumb things to check: are all the bricks really up and is the
>> shd connected to them etc.
> All bricks are definitely up. I just created a file on a client and it
> appeared in all 4 bricks.
>
> I don't know how to tell whether the shd is connected to all of them,
> though.
Look for the latest messages like "connected to client-xxx" and "disconnected from client-xxx" in the shd logs, just like in the mount logs.
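A quick way to check (again assuming the default log path):

    grep -iE 'connected to|disconnected from' /var/log/glusterfs/glustershd.log | tail -n 20

If the most recent line for each brick is a "connected" message, the shd
should be talking to all of them.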
> Cheers,
> Kingsley.


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


