Re: [Gluster-users] No healing on peer disconnect - is it correct?

There will be pending heals only when the brick process goes down or there is a disconnect between the client and that brick. When you say " gluster process is down but bricks running", I'm guessing you killed only glusterd and not the glusterfsd brick process. That won't cause any pending heals. If there is something to be healed, `gluster volume heal $volname info` will display the list of files.
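To see the distinction Ravi describes, the management daemon (glusterd) and the per-brick processes (glusterfsd) can be checked separately, and pending heals listed with the heal CLI. A minimal sketch, assuming a volume named "myvol" (a placeholder; substitute your own volume name):

```shell
# glusterd (management daemon) and glusterfsd (brick processes)
# are independent; killing only glusterd leaves the bricks serving I/O.
pgrep -a glusterd      # the management daemon
pgrep -a glusterfsd    # one process per local brick

# List files with pending heals ("myvol" is a placeholder volume name)
gluster volume heal myvol info

# Per-brick counts of entries needing heal
gluster volume heal myvol statistics heal-count
```

If only glusterd was killed, `pgrep glusterfsd` still shows the brick processes, which is why heal info comes back empty.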

Hope that helps,
Ravi
On 10/06/19 7:53 PM, Martin wrote:
My VMs use Gluster as storage through its libgfapi support in QEMU, but I don't see any healing on the reconnected brick.

Thanks Karthik / Ravishankar in advance!

On 10 Jun 2019, at 16:07, Hari Gowtham <hgowtham@xxxxxxxxxx> wrote:

On Mon, Jun 10, 2019 at 7:21 PM snowmailer <snowmailer@xxxxxxxxx> wrote:

Can someone advise on this, please?

BR!

On 3 Jun 2019, at 18:58, Martin <snowmailer@xxxxxxxxx> wrote:

Hi all,

I need someone to explain whether my Gluster behaviour is correct; I am not sure it works as it should. I have a simple Replica 3 volume (Number of Bricks: 1 x 3 = 3).

When one of my hypervisors is disconnected as a peer, i.e. the gluster process is down but the bricks are still running, the other two healthy nodes start signalling that they have lost one peer. This is correct.
Next, I restart the gluster process on the node where it failed. I thought this should trigger healing of files on the failed node, but nothing happens.

I run VM disks on this Gluster volume. No healing is triggered after the gluster restart; the remaining two nodes see the peer again after the restart, and everything runs without downtime.
Even VMs running on the “failed” node, where the gluster process was down (but the bricks were up), keep running without downtime.
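To confirm whether the bricks really stayed online during such an outage, something like the following could be checked on each node. A hedged sketch, with "myvol" as a placeholder volume name:

```shell
# Peer view from this node: each peer should show "Peer in Cluster (Connected)"
gluster peer status

# Per-brick status: shows the PID and Online (Y/N) state of every brick.
# Bricks still marked Online while glusterd is down elsewhere would
# explain why no heal is needed. ("myvol" is a placeholder.)
gluster volume status myvol
```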

I assume your VMs use gluster as the storage. In that case, the
gluster volume might be mounted on all the hypervisors.
The mount/client is smart enough to serve the correct data from the
other two machines, which were always up.
This is why things are working fine.

Gluster should heal the brick.
Adding people who can help you better with the heal part.
@Karthik Subrahmanya  @Ravishankar N, please take a look and answer this part.


Is this behaviour correct? I mean, no healing is triggered after the peer is reconnected, and the VMs keep running.

Thanks for explanation.

BR!
Martin


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.

_______________________________________________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel

