Re: Continual heals happening on sharded cluster, me too...

On 12/05/2016 22:08, Nicolas Ecarnot wrote:
I read and tried to use the information here:
https://www.mail-archive.com/gluster-users%40gluster.org/msg24598.html
but I'm missing the knowledge needed to fix things.

Here is what I see when running "gluster volume heal data-shard-03 info":

[...]

* serv-vm-al02

gluster volume heal data-shard-03 info
Brick serv-vm-al01:/gluster/data/brick03
<gfid:75ab0b1c-258f-4349-85bd-49ef80592919>
<gfid:39bdcf05-f5b1-4df2-9941-614838b50e18>
<gfid:7e0d84a0-0749-4a2d-9390-e95d489ec66a>
Status: Connected
Number of entries: 3

Brick serv-vm-al02:/gluster/data/brick03
/.shard/41573624-feb9-4ea6-bbd4-f0a912429b2f.1003
/.shard/26ec536a-8919-478c-834c-f6ac70882ee6.2351
/.shard/26ec536a-8919-478c-834c-f6ac70882ee6.1113
Status: Connected
Number of entries: 3

Hi,

Just to be sure:
- is the situation above harmful or risky?
- if yes, what should I do to fix it?
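For example, would simply re-triggering a heal of the indexed entries be enough, or is a full crawl needed? Just a sketch of what I have in mind, nothing I have run so far:

# heal only the entries currently listed in the heal index
gluster volume heal data-shard-03

# or crawl the whole volume (heavier)
gluster volume heal data-shard-03 full

# and check whether any entry is actually in split-brain
gluster volume heal data-shard-03 info split-brain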

Following the URL above, I identified the VM file involved.
What action should I take to cope with this issue?
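(For the record, the identification step was roughly the following, as I understood it from the linked post, assuming GNU find and taking the brick path and one gfid from the output above. On the brick, a regular file's gfid entry is a hard link under .glusterfs/<first two hex chars>/<next two>/<gfid>:

BRICK=/gluster/data/brick03
GFID=75ab0b1c-258f-4349-85bd-49ef80592919

# find the real path sharing the same inode as the gfid hard link,
# skipping the .glusterfs tree itself
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
  -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print

# for the /.shard/<base-gfid>.<n> entries, the part before the dot is the
# gfid of the original, unsharded file, so the same lookup applies to it.)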

Thank you.

--
Nicolas ECARNOT
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



