Re: VM disks corruption on 3.7.11

On 24/05/2016 8:24 PM, Kevin Lemonnier wrote:
So the VMs were configured with cache set to none; I just tried with
cache=directsync and it seems to fix the issue. I still need to run
more tests, but I did a couple already with that option and saw no I/O errors.

I never had to do this before; is this a known issue? I found the clue in an old mail
from this mailing list. Did I miss some doc saying you should be using
directsync with glusterfs?

Interesting. I remember seeing some issues with cache=none on the proxmox mailing list. I use writeback or the default, which might be why I haven't encountered these issues. I suspect you would find that writethrough works as well.
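
For reference, the cache mode is set per disk on the qemu-kvm command line. A minimal sketch, assuming qemu was built with glusterfs (libgfapi) support; the host, volume, and image names below are placeholders:

    qemu-system-x86_64 \
        -drive file=gluster://gluster-host/datastore/images/vm-100-disk-1.qcow2,if=virtio,cache=directsync \
        ...

The valid cache modes are none, writeback, writethrough, directsync, and unsafe.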


From the proxmox wiki:


"This mode causes qemu-kvm to interact with the disk image file or block device with O_DIRECT semantics, so the host page cache is bypassed
     and I/O happens directly between the qemu-kvm userspace buffers and the          storage device. Because the actual storage device may report
     a write as completed when placed in its write queue only, the guest's virtual storage adapter is informed that there is a writeback cache,
     so the guest would be expected to send down flush commands as needed to manage data integrity.
     Equivalent to direct access to your hosts' disk, performance wise."
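
If you want to exercise that flush path from inside the guest while testing, something like fio with direct I/O and a flush after every write should do it. A sketch; the job name, test file path, and size are arbitrary:

    fio --name=flushtest --filename=/tmp/flushtest.dat \
        --rw=randwrite --bs=4k --size=1G \
        --direct=1 --fsync=1

With --fsync=1 the guest issues a flush after each write, which is exactly what the virtual writeback cache described above expects it to do.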


I'll restore a test VM and try cache=none myself.
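
On Proxmox the cache mode can be changed per disk with qm. A sketch, assuming VM id 100 and a virtio disk on a storage named "gluster" (adjust the ids and paths to your setup):

    qm set 100 --virtio0 gluster:100/vm-100-disk-1.qcow2,cache=none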
-- 
Lindsay Mathieson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
