Re: Location of the gluster client log with libgfapi?


 



> Basically, every now & then I notice random VHD images popping up in the
> heal queue, and they're almost always in pairs, "healing" the same file on
> 2 of the 3 replicate bricks.
> That already strikes me as odd, as if a file is "dirty" on more than one
> brick, surely that's a split-brain scenario? (nothing logged in "info
> split-brain" though)

I don't think that's a problem; pending heals tend to be listed on every brick
except the one actually being healed. I believe the entries you see are the heal
sources, not the dirty copy itself.
At least that's what I've noticed on my clusters.
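If you want to double-check from the CLI, the usual commands are below (the
volume name "gv0" is just a placeholder for yours):

```shell
# List pending heal entries per brick. Entries usually show up under
# the source bricks too, not only the brick holding the dirty copy,
# so "pairs" here don't necessarily mean split-brain.
gluster volume heal gv0 info

# Show only genuine split-brain entries. Empty output here means the
# pairs above are just normal pending heals.
gluster volume heal gv0 info split-brain
```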

> 
> Anyway, these heal processes always hang around for a couple of hours, even
> when it's just metadata on an arbiter brick.
> That doesn't make sense to me, an arbiter shouldn't take more than a couple
> of seconds to heal!?

Sorry, no idea on that, I never used arbiter setups.

> 
> I spoke with Joe on IRC, and he suggested I'd find more info in the
> client's logs...

Well, it'd be good to know why they need healing, for sure.
I don't know of any way to get that on the gluster side; you'd need to find a
way in oVirt to redirect the output of the qemu process somewhere, since
that's where the libgfapi client logs end up.
I've never used oVirt so I can't really help with that :/
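For what it's worth, if you can influence the qemu command line, recent QEMU
versions let the gluster block driver take "debug" and "logfile" options, so
you can capture the gfapi client log without touching stderr. The host, volume
and paths below are placeholders, and I haven't tried this under oVirt:

```shell
# Raise the gfapi log level (0-9) and write the client log to a file.
# Requires QEMU built with gluster support (the options landed around
# QEMU 2.7/2.8).
qemu-system-x86_64 \
    -drive file=gluster://storage1/gv0/vm01.img,format=raw,if=virtio,\
file.debug=7,file.logfile=/var/log/qemu-gfapi-vm01.log
```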

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
