Re: Scheduling based on network speed / mixing compute and storage nodes

On Wed, 14 May 2008, Jordan Mendler wrote:

> So coming back to network-speed scheduling, would there be a way to have
> each node prefer writing to its locally hosted gluster brick

AFR writes, in theory, happen to all subvolumes at once, as far as I understand. For reads, you can set an option for a preferred read subvolume.
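For example, in a client-side volfile the cluster/afr translator takes a read-subvolume option. A minimal sketch (the volume names here are placeholders, not from the original thread):

```
volume mirror0
  type cluster/afr
  subvolumes local-brick remote-brick
  # Prefer the locally hosted brick for reads;
  # writes still go to all subvolumes.
  option read-subvolume local-brick
end-volume
```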

> to then be AFR replicated to its close-by nodes that are on the same switch?

You can regulate this by making sure that all the mirrors for a particular brick are on the same switch, and stripe/unify the AFRs together. The problem you'll have with this, however, is that if you lose a switch, that entire set of data will become inaccessible because all the mirrored copies will go with it.
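A sketch of that layout, mirroring within a switch and then unifying the per-switch mirrors; all volume names and the namespace brick are illustrative, and unify also needs a scheduler and namespace configured:

```
# Each mirror pair lives entirely on one switch.
volume switch1-mirror
  type cluster/afr
  subvolumes s1-node1 s1-node2
end-volume

volume switch2-mirror
  type cluster/afr
  subvolumes s2-node1 s2-node2
end-volume

# Unify the per-switch mirrors into a single namespace.
volume unified
  type cluster/unify
  subvolumes switch1-mirror switch2-mirror
  option namespace ns-brick
  option scheduler rr
end-volume
```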

In practice, you'll find that you have more reads than writes, so writes hitting all switches is less of a problem than reads hitting all switches. Since there are more reads, you probably want a complete copy of the data on each switch, so unify/stripe data across the nodes on a single switch and AFR these together. That way you can contain most of the reads within a switch.
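The inverted layout described above would unify the nodes within each switch and AFR the per-switch unifies together, so each switch holds a complete copy and reads can stay switch-local. Again a hedged sketch with placeholder names:

```
# One unify per switch, spanning that switch's nodes.
volume switch1-unify
  type cluster/unify
  subvolumes s1-node1 s1-node2 s1-node3
  option namespace s1-ns
  option scheduler rr
end-volume

volume switch2-unify
  type cluster/unify
  subvolumes s2-node1 s2-node2 s2-node3
  option namespace s2-ns
  option scheduler rr
end-volume

# Mirror the switch-local unifies: each switch has a full copy,
# so most reads stay within a switch while writes cross switches.
volume mirrored
  type cluster/afr
  subvolumes switch1-unify switch2-unify
  option read-subvolume switch1-unify
end-volume
```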

> Also, has anyone attempted this kind of combined setup of Gluster across
> compute-nodes?

It sounds like a pretty standard (stripe | unify) + AFR setup.

Gordan



