Placing different pools on different OSDs in the same physical servers

Hi,

To avoid confusion, I would name the "host" entries in the crush map
differently. Make sure these host names can still be resolved to the correct
boxes, though (/etc/hosts on all the nodes). You're also missing a new
rule entry for the second root (also shown in the link you mentioned).
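
For example, rules along these lines should do it -- I'm reusing the root
names from your map, and the ruleset numbers are only placeholders that have
to be unique within your map:

rule cinder {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take cinder
        step chooseleaf firstn 0 type host
        step emit
}

rule nova {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take nova
        step chooseleaf firstn 0 type host
        step emit
}

Afterwards point each pool at its rule, e.g.

ceph osd pool set <your-cinder-pool> crush_ruleset 1
ceph osd pool set <your-nova-pool> crush_ruleset 2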

Lastly, and this is *extremely* important: You need to set

[global]
osd crush update on start = false

in your ceph.conf, because there is currently no logic for OSDs to detect
their location when multiple roots are present, as "documented" here:
http://tracker.ceph.com/issues/6227
If you don't set this, then whenever you start an OSD belonging to your SSD
root, it will get moved back to the default root.
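
With the automatic update disabled you have to place the OSDs in the tree
yourself (once), for example with something like this -- the weights and the
renamed host buckets are only examples, adjust them to your map:

ceph osd crush create-or-move osd.2 0.010 root=nova host=cephosd1-nova
ceph osd crush create-or-move osd.3 0.010 root=nova host=cephosd2-nova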

Side note: this is really unfortunate, since with cache pools it is now
common to have platters and SSDs on the same physical hosts, and therefore
multiple parallel roots.

On 10/07/2014 17:04, Nikola Pajtic wrote:
> Hello to all,
> 
> I was wondering is it possible to place different pools on different
> OSDs, but using only two physical servers?
> 
> I was thinking about this: http://tinypic.com/r/30tgt8l/8
> 
> I would like to use osd.0 and osd.1 for the Cinder/RBD pool, and osd.2 and
> osd.3 for Nova instances. I was following the howto from the ceph
> documentation:
> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
> but it assumes that there are 4 physical servers: 2 for the "Platter" pool
> and 2 for the "SSD" pool.
> 
> What I was concerned about is how the CRUSH map should be written and how
> CRUSH will decide where to send the data, given that the same hostnames
> appear in both the cinder and nova pools. For example, is it possible to do
> something like this:
> 
> 
> # buckets
> host cephosd1 {
>         id -2           # do not change unnecessarily
>         # weight 0.010
>         alg straw
>         hash 0  # rjenkins1
>         item osd.0 weight 0.000
> }
> 
> host cephosd1 {
>         id -3           # do not change unnecessarily
>         # weight 0.010
>         alg straw
>         hash 0  # rjenkins1
>         item osd.2 weight 0.010
> }
> 
> host cephosd2 {
>         id -4           # do not change unnecessarily
>         # weight 0.010
>         alg straw
>         hash 0  # rjenkins1
>         item osd.1 weight 0.000
> }
> 
> host cephosd2 {
>         id -5           # do not change unnecessarily
>         # weight 0.010
>         alg straw
>         hash 0  # rjenkins1
>         item osd.3 weight 0.010
> }
> 
> root cinder {
>         id -1           # do not change unnecessarily
>         # weight 0.000
>         alg straw
>         hash 0  # rjenkins1
>         item cephosd1 weight 0.000
>         item cephosd2 weight 0.000
> }
> 
> root nova {
>         id -6           # do not change unnecessarily
>         # weight 0.020
>         alg straw
>         hash 0  # rjenkins1
>         item cephosd1 weight 0.010
>         item cephosd2 weight 0.010
> }
> 
> If not, could you share an idea of how this scenario could be achieved?
> 
> Thanks in advance!!
> 
> 


