glusterfs 3.6.5 and selfheal

Hi,

I am running GlusterFS servers with a replicated volume used as qemu-kvm (Proxmox) VM storage, mounted via the libgfapi module. The servers are on a network with MTU 9000; the client is not (yet).
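
For context, since the images are accessed through libgfapi rather than a FUSE mount, qemu addresses them with gluster:// URIs. A minimal sketch of what that looks like (server, volume, and image names taken from the heal output below; the rest is illustrative):

    # query the image directly over libgfapi; any of the replica servers can be named
    qemu-img info gluster://stor1/HA-100G-POC-PVE/images/100/vm-100-disk-1.raw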
My question is this:
Is it normal to see this kind of output from "gluster volume heal HA-100G-POC-PVE info"?

Brick stor1:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal

Number of entries: 1

Brick stor2:/exports/HA-100G-POC-PVE/100G/
/images/100/vm-100-disk-1.raw - Possibly undergoing heal

This happens pretty often, but with different disk images on different replicated volumes. I am not sure whether this indicates a problem or is expected behaviour; I am just curious about it.
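
For reference, here is a minimal sketch of how one could check whether the file really has pending heals (the brick path is the one from the output above; the trusted.afr.* attributes are the standard AFR changelog xattrs, which is an assumption about where to look, not something shown in the heal output itself):

    # run on each brick directly (not on the client mount)
    getfattr -d -m . -e hex /exports/HA-100G-POC-PVE/100G/images/100/vm-100-disk-1.raw
    # non-zero trusted.afr.HA-100G-POC-PVE-client-* values indicate pending heals;
    # all-zero values suggest the "Possibly undergoing heal" entry is transient

    # and explicitly rule out split-brain
    gluster volume heal HA-100G-POC-PVE info split-brain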

--
Best regards,
Roman.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
