Re: How to tell a VM to write more local ceph nodes than to the network.

On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler <roland@xxxxxxxxxxxxxx> wrote:
> I have a 4 node ceph cluster, but the disks are not equally distributed
> across all machines (they are substantially different from each other)
>
> One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS (s3) and
> two machines have only two 1 TB drives each (s2 & s1).
>
> Now machine s3 has by far the most CPUs and RAM, so I'm running my VMs
> mostly from there, but I want to make sure that writes to the Ceph
> cluster land on the "local" OSDs on s3 first, and that the additional
> writes/copies then get done over the network.
>
> Is this possible with Ceph? The VMs are KVM in Proxmox, in case it's
> relevant.

In general you can't set up Ceph to write to the local node first. In
some specific cases you can, if you're willing to do a lot more work
around data placement, and this *might* be one of those cases.

To do this, you'd need to change the CRUSH rules pretty extensively,
so that instead of selecting OSDs at random, the rule works in two
steps (a rough sketch follows the list):
1) Starting from bucket s3, select a random OSD and put it at the
front of the OSD list for the PG.
2) Starting from a bucket which contains all the other OSDs, select
N-1 more at random (where N is the number of desired replicas).
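
Purely as illustration, here's roughly what such a rule could look
like in a decompiled CRUSH map. The names are assumptions: "s3" is
taken to be the host bucket for that machine, and "others" is a bucket
you'd have to create yourself to hold the remaining hosts (h1, s1,
s2); the rule name, ruleset number, and sizes would need adjusting to
your map.

rule s3-primary {
        ruleset 1
        type replicated
        min_size 2
        max_size 10
        # step 1: pick one OSD under the s3 host; it becomes the primary
        step take s3
        step choose firstn 1 type osd
        step emit
        # step 2: pick the remaining N-1 replicas from the other hosts
        step take others
        step chooseleaf firstn -1 type host
        step emit
}

You'd edit the decompiled map, recompile it with crushtool, and load
it back with "ceph osd setcrushmap".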

You can look at the documentation on CRUSH or search the list archives
for more on this subject.

Note that doing this has a bunch of downsides: you'll have balance
issues, because every piece of data will be on the s3 node (that's a
TERRIBLE name for a node in a project which has API support for Amazon
S3, btw :p); if you add new VMs on a different node, they'll still be
sending all their writes to the s3 node (unless you set them up on a
different pool with different CRUSH rules, as shown below); and s3
will be satisfying all the read requests, so the other nodes are just
backups in case of disk failure, etc.
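
If you did go the separate-pool route, the pool would need to be
pointed at the custom rule. A hedged example, where the pool name, PG
count, and ruleset id are placeholders chosen for illustration:

ceph osd pool create vms-local 128 128
ceph osd pool set vms-local crush_ruleset 1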
-Greg