Modifying data via FUSE causes heal problem

hi there

I run GlusterFS 3.10.5 and have 3 peers with volumes in replication.
Each time I copy some data on a client (which is also a peer), I see something like this:

# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs has been successful
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 0
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 2
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 1
# end of QEMU-VMs:

These entries heal automatically a little later and all ends up OK, but why does this happen in the first place? Is this expected?
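
For reference, the counts above come from the standard heal statistics command, run on one of the peers; roughly:

gluster volume heal QEMU-VMs statistics heal-count
# and, to list which entries are actually pending:
gluster volume heal QEMU-VMs info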

Clients (all of them peers) mount via FUSE with the help of autofs, like this (e.g., on peer 10.5.6.49):

QEMU-VMs -fstype=glusterfs,acl 127.0.0.1,10.5.6.100,10.5.6.32:/QEMU-VMs
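
(For comparison, my understanding is that this map entry corresponds to a manual FUSE mount roughly like the one below; the mount point /mnt/QEMU-VMs is just an example, and I'm assuming the extra hosts end up acting as backup volfile servers.)

mount -t glusterfs -o acl,backup-volfile-servers=10.5.6.100:10.5.6.32 \
      127.0.0.1:/QEMU-VMs /mnt/QEMU-VMs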

Is this a tuning/tweaking problem (latencies, etc.)?
Is this an autofs mount problem?
Or some other problem?

many thanks, L.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


