Re: Rack Awareness

2013/1/15 Wido den Hollander <wido@xxxxxxxxx>:
> Yes, no problem at all! That's what the crushmap is for. This way you can
> tell Ceph exactly how to distribute your data.

Cool.
If I understood correctly, I have to configure Ceph with the OSDs in the
standard way and then map those OSDs into a custom crushmap like this:

host node1 {
        id -1
        alg straw
        hash 0
        item osd.0 weight 1.00
        item osd.1 weight 1.00
}

host node2 {
        id -2
        alg straw
        hash 0
        item osd.2 weight 1.00
        item osd.3 weight 1.00
}

rack rack1 {
        id -3
        alg straw
        hash 0
        item node1 weight 2.00
}

rack rack2 {
        id -4
        alg straw
        hash 0
        item node2 weight 2.00
}
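
I guess the two racks also need to sit under a common root bucket so that a
rule can start from it. Something along these lines, I suppose (the name
"default" and the id -5 are just my guesses):

root default {
        id -5                   # assumed id, only needs to be unique
        alg straw
        hash 0                  # rjenkins1
        item rack1 weight 2.00
        item rack2 weight 2.00
}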

This should give me 4 OSDs, 2 servers and 2 racks, with each rack holding
one server with 2 OSDs.
Data will be automatically striped across both racks, right?
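
And if replicas really have to end up in different racks, I suppose the rule
needs a chooseleaf step over the rack type. A rough sketch of what I have in
mind (the rule name and the min/max sizes are assumptions on my part):

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default                       # assumed root name from above
        step chooseleaf firstn 0 type rack      # one leaf per distinct rack
        step emit
}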

