Issue enabling use-compound-fops with gfapi

Hi list,

on a dev system I'm testing some options that are supposed to give
improved performance. I'm running oVirt with gfapi enabled on Gluster
3.12.13, and when I set "cluster.use-compound-fops" to "on", every VM is
paused due to a storage I/O error, while the file system remains
accessible through the FUSE client (only the gfapi application stops working).

In the qemu log file I could see these gluster related messages:

2018-09-14T11:49:37.020942Z qemu-kvm: terminating on signal 15 from pid
1513 (/usr/sbin/libvirtd)
2018-09-14T11:49:42.766431Z qemu-kvm: Failed to flush the L2 table
cache: Input/output error
2018-09-14T11:49:44.766853Z qemu-kvm: Failed to flush the refcount block
cache: Input/output error
[2018-09-14 11:49:44.869112] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-1: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869284] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-0: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869515] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-2: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869639] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-3: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869823] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-4: All subvolumes are down. Going
offline until atleast one of them comes back up.
2018-09-14 11:49:45.827+0000: shutting down, reason=destroyed


If I set "cluster.use-compound-fops" back to "off", everything starts
working correctly again.
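For reference, this is how the option was toggled with the standard gluster CLI (the volume name "vm-images-repo-demo" is inferred from the AFR subvolume names in the log above, so treat it as an assumption):

```shell
# Enable compound FOPs (this is what triggered the I/O errors under gfapi):
gluster volume set vm-images-repo-demo cluster.use-compound-fops on

# Revert to the default; VMs resume working after this:
gluster volume set vm-images-repo-demo cluster.use-compound-fops off

# Check the current value:
gluster volume get vm-images-repo-demo cluster.use-compound-fops
```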

Is there something else to configure, or is this a bug?


Greetings,

    Paolo

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
