On 27 January 2017 at 19:05, Kevin Lemonnier <lemonnierk@xxxxxxxxx> wrote:
> Basically, every now & then I notice random VHD images popping up in the
> heal queue, and they're almost always in pairs, "healing" the same file on
> 2 of the 3 replicate bricks.
> That already strikes me as odd: if a file is "dirty" on more than one
> brick, surely that's a split-brain scenario? (Nothing is logged in "info
> split-brain", though.)
I don't think that's a problem. Heal info does tend to list the entry on
every brick except the one actually being healed; I believe it's the source
bricks that report the file to be healed, not the dirty one.
At least that's what I noticed on my clusters.
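For what it's worth, you can check which copy is really marked dirty by
reading the AFR changelog xattrs straight off the bricks. Rough sketch
below; the brick path, file name and VOLNAME are placeholders for your own:

    # run on each storage node, against the file's path on the brick itself
    getfattr -d -m . -e hex /bricks/brick1/path/to/disk.vhd
    # a non-zero trusted.afr.VOLNAME-client-N xattr means pending operations
    # are recorded against brick N (i.e. that brick is the one needing heal);
    # trusted.afr.dirty, if non-zero, marks writes that were still in flight

If only one brick gets blamed by the others, it's a normal heal; when the
bricks blame each other is when I'd expect split-brain to be reported.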
>
> Anyway, these heal processes always hang around for a couple of hours, even
> when it's just metadata on an arbiter brick.
> That doesn't make sense to me; an arbiter shouldn't take more than a couple
> of seconds to heal!?
Sorry, no idea on that; I've never used arbiter setups.
If it's actually showing the source files that are being healed *from*, not
*to*, that'd make sense, although it's a counter-intuitive way of displaying
things & completely contrary to all of the documentation (as described by
readthedocs.gluster.io, Red Hat & Rackspace).
>
> I spoke with Joe on IRC, and he suggested I'd find more info in the
> client's logs...
Well, it'd be good to know why they need healing, for sure.
I don't know of any way to get that on the gluster side; you'd need to find
a way in oVirt to redirect the output of the qemu process somewhere, since
that's where you'll find the libgfapi logs.
Never used oVirt so I can't really help on that :/
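That said, if oVirt is driving qemu through libvirt (I believe it is),
libvirt normally keeps the qemu process's stderr in
/var/log/libvirt/qemu/<vm-name>.log on the hypervisor, and that's where the
libgfapi messages tend to land. Roughly, with VOLNAME and the VM name as
placeholders:

    # make the client side (libgfapi included) more verbose for this volume
    gluster volume set VOLNAME diagnostics.client-log-level DEBUG
    # wait for one of the phantom heals to show up, then on the hypervisor:
    less /var/log/libvirt/qemu/<vm-name>.log
    # drop the verbosity back down afterwards
    gluster volume set VOLNAME diagnostics.client-log-level INFO

The self-heal daemon's own log, /var/log/glusterfs/glustershd.log on each
node, might also hint at why the heals hang around for hours, even if it
won't show the original cause.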
Well you've given me somewhere to start from at least.
Appreciated!
D