Re: Re: [Gluster-users] I/O fair share to avoid I/O bottlenecks on small clusters


 



> The line that you need to add is the one with "writeback" in it.
> If you are running qemu-kvm manually, you'll need to add the "cache=writeback"
> to your list of -drive option parameters.
> All of this, of course, doesn't preclude applying ionice to the qemu container processes.
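(For reference, the quoted suggestion amounts to something like the
following on the qemu-kvm command line; the image path here is just a
placeholder:

    qemu-kvm -m 1024 \
        -drive file=/mnt/gluster/vms/win2k.img,cache=writeback

or, for libvirt-managed guests, cache='writeback' on the <driver>
element of the <disk> definition.)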

ionice has no effect on network mounts, only on local disks, so it's
basically useless to ionice the KVM process when its I/O goes to
gluster rather than to a local disk.

I agree with Gorden: the solution in this particular case has to be
I/O improvements at the OS level (on the gluster servers) and at the
gluster application level.
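As a rough sketch of what the OS-level part could mean on the servers
(device and process names are examples; note that this only
deprioritizes gluster's disk I/O as a whole against other local
processes, it cannot tell one VM's traffic from another's):

    # on each gluster/DRBD server, per backing disk
    echo cfq > /sys/block/sdb/queue/scheduler
    # put the gluster server daemon in the lowest best-effort class
    ionice -c2 -n7 -p $(pidof glusterfsd)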

The setup is as follows:

2 gluster servers replicated to one another with DRBD, so basically
there is only 1 active storage server with 4 disks --> 2 for the OS
(md RAID 1) and 2 for storage (also md RAID 1, with DRBD replication
on top of it).
I know it's not optimal, but it is very solid in terms of management
and keeps failover of a server node simple and clear.
The idea was to build the gluster storage so that every 2 servers
form 1 storage server of 2TB (1U servers), then add 1U pairs as
needed.
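For anyone trying to picture the stack, it is roughly the following
(device names, hostnames and the DRBD resource name are made up):

    # storage pair: md RAID 1, replicated to the peer node by DRBD
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

    # /etc/drbd.conf (fragment)
    resource r0 {
        protocol C;
        on stor1 { device /dev/drbd0; disk /dev/md1;
                   address 10.0.0.1:7788; meta-disk internal; }
        on stor2 { device /dev/drbd0; disk /dev/md1;
                   address 10.0.0.2:7788; meta-disk internal; }
    }

The filesystem exported as the gluster brick sits on /dev/drbd0 on
whichever node is currently primary.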

The applications that use this storage are:
1) mail storage - NFS over gluster (about 3000 mail accounts)
2) web server statistics logs - Samba over gluster (some NT servers
use this)
3) KVM images - for now I'm testing with only 1 KVM host and 3
virtual win2k servers

What happens is that if, say, 1 virtual win2k (a guest, not the host)
runs a high-I/O test with a stress I/O tool inside the win2k guest,
the entire gluster storage crawls to the point of not functioning,
including mail etc. The gluster server load goes to 3 or 4 and the
storage barely functions. This is with only 1 virtual machine (win2k
on KVM), so I'm just wondering what will happen with, say, 10 virtual
machines - nothing will work.

I agree that the md RAID is affecting the whole thing, but I didn't
think it would be crucial.
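One way to see how much of the stall comes from the md/DRBD layer
versus gluster itself is to watch the backing devices on the active
server while the win2k guest runs its stress test, e.g.:

    iostat -x 2

and compare await and %util on the sd*, md and drbd devices against
the load reported for the glusterfsd process.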

Thanks.





2010/2/1, Jeff Darcy <jdarcy@xxxxxxxxxx>:
> On 01/31/2010 09:06 AM, Ran wrote:
>> You guys are talking about network I/O; I'm talking about the gluster
>> server disk I/O.
>> The idea to shape the traffic does make sense, since the virtual
>> machine servers do use the network to get to the disks (gluster),
>> but what about when there are, say, 5 KVM servers (with VPSs) all on
>> gluster - what do you do then? It's not quite fair share, since every
>> server has its own fair share and doesn't see the others.
>>
>> Also, there are other applications that use gluster, like mail etc.,
>> and I see that gluster I/O is very often very high, causing the whole
>> storage to stop working.
>> It's very disturbing.
>
>
> You bring up a good set of points.  Some of these problems can be
> addressed at the hypervisor (i.e. GlusterFS client) level, some can be
> addressed by GlusterFS itself, and some can be addressed only at the
> local-filesystem or block-device level on the GlusterFS
> servers.  Unfortunately, I/O traffic shaping is still in its infancy
> compared to what's available for networking - or perhaps even "infancy"
> is too generous.  As far as the I/O stack is concerned, all of the
> traffic is coming from the glusterfsd process(es) without
> differentiation, so even if the functionality to apportion I/O amongst
> tasks existed it wouldn't be usable without more information.  Maybe
> some day...
>
> What you can do now at the GlusterFS level, though, is make sure that
> traffic is distributed across many servers and possibly across many
> volumes per server to take advantage of multiple physical disks and/or
> interconnects for one server.  That way, a single VM will only use a
> small subset of the servers/volumes and will not starve other clients
> that are using different servers/volumes (except for network bottlenecks
> which are a separate issue).  That's what the "distribute" translator is
> for, and it can be combined with replicate or stripe to provide those
> functions as well.  Perhaps it would be useful to create and publish
> some up-to-date recipes for these sorts of combinations.
>
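(For what it's worth, the distribute-over-replicate combination Jeff
describes looks roughly like this in a client volfile; the hostnames
and volume names below are made up:

    volume stor1-brick
      type protocol/client
      option transport-type tcp
      option remote-host stor1
      option remote-subvolume brick
    end-volume

    volume stor2-brick
      type protocol/client
      option transport-type tcp
      option remote-host stor2
      option remote-subvolume brick
    end-volume

    volume afr0
      type cluster/replicate
      subvolumes stor1-brick stor2-brick
    end-volume

    # ... afr1, afr2, ... defined the same way over additional 1U pairs

    volume dht
      type cluster/distribute
      subvolumes afr0 afr1
    end-volume

With that layout a single busy VM only hammers the replicate pair its
file hashes to, instead of the whole storage.)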



