I'm on 3.10.5. It's rock solid (at least with the FUSE mount
<grin>). We are also typically on a somewhat slower GlusterFS LAN
(bonded 2x1G, jumbo frames), so that may be a factor. I'll try to set up a
trusted pool to test libgfapi soon. I'm curious how much faster it is, but
the FUSE mount is fast enough, dirt simple to use, and just works for all
VM operations such as migration, snapshots, etc., so there hasn't been a
compelling need to squeeze out a few more I/Os.
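For anyone comparing the two access paths discussed in this thread, a minimal sketch (the hostname "gl1", volume "vmstore", and image name are made-up examples, not from the actual setups described here):

```shell
# FUSE mount: client I/O goes through the glusterfs FUSE helper, and the
# volume looks like an ordinary filesystem to libvirt/QEMU.
mount -t glusterfs gl1:/vmstore /var/lib/libvirt/images

# libgfapi: QEMU's gluster block driver talks to the bricks directly,
# bypassing the FUSE layer; the disk is addressed by a gluster:// URI.
qemu-system-x86_64 \
  -drive file=gluster://gl1/vmstore/vm1.qcow2,format=qcow2,if=virtio
```

The libgfapi path avoids the kernel round-trips of FUSE, which is where the expected speedup comes from.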
On 9/9/2017 3:08 PM,
lemonnierk@xxxxxxxxx wrote:
Mh, not so sure really; we're using libgfapi and it's been working perfectly fine. And trust me, there have been A LOT of various crashes, reboots, and kills of nodes. Maybe it's a version thing? A new bug in the newer Gluster releases that doesn't affect our 3.7.15.

On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote:

Well, that makes me feel better. I've seen all these stories here and on oVirt recently about VMs going read-only, even on fairly simple layouts. Each time, I've responded that we just don't see those issues. I guess the fact that we were lazy about switching to gfapi turns out to be a potential explanation <grin>

-wk

On 9/9/2017 6:49 AM, Pavel Szalbot wrote:

Yes, this is my observation so far.

On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti@xxxxxxxxxx> wrote:

So, to recap:
- with gfapi, your VMs crash/mount read-only on a single node failure;
- with gfapi, fio also seems to have no problems;
- with the native FUSE client, both VMs and fio have no problems at all.
_______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx http://lists.gluster.org/mailman/listinfo/gluster-users