Re: Re: [Gluster-users] I/O fair share to avoid I/O bottlenecks on small clusters

Ran wrote:
The line that you need to add is the one with "writeback" in it.
If you are running qemu-kvm manually, you'll need to add the "cache=writeback" to your list of -drive option parameters. All of this, of course, doesn't preclude applying ionice to the qemu container processes.

ionice has no effect on network mounts, just local disks,
so basically it's useless to ionice the KVM process, which takes its I/O
from gluster rather than from a local disk.
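
For reference, a rough sketch of what that quoted suggestion looks like on the command line (the image path, memory size and virtio settings here are only placeholders, not anything taken from this thread):

  qemu-kvm -m 2048 -smp 2 \
      -drive file=/path/to/guest.img,if=virtio,cache=writeback

  # idle-class ionice on a running guest; as noted above, this only
  # helps for I/O that actually hits a local disk
  ionice -c3 -p <qemu pid>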

Something just occurred to me. You are running separate volumes for your different services, right? A separate glfs volume for your mail spools / Maildirs, VM image storage, etc?

If that is the case, then you should be able to ionice the glfs processes on the server accordingly. If you are finding that the VM images are causing a lot of I/O load relative to other things, you could "ionice -c3" the glfs daemon running that particular volume.
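
For example (the volume name here is made up, and the exact way of finding the brick process varies with the gluster version), something along these lines on each storage server:

  # find the brick daemon (glusterfsd) serving the hypothetical "vm-images" volume
  pgrep -f 'glusterfsd.*vm-images'
  # newer releases can also list the brick PIDs with:
  #   gluster volume status vm-images

  # put that daemon into the idle I/O scheduling class
  ionice -c3 -p <brick PID>

That only influences how the brick's local disk scheduler (CFQ) prioritises its requests, which is exactly why it can work on the server side even though ionice on the client/qemu side does nothing for gluster-backed I/O.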

Gordan



