Questions regarding the CRUSH map

Hi,
   I have some general questions regarding the CRUSH map. It would be
helpful if someone could clarify them.

1. I saw that a 'host' bucket is always created in the CRUSH maps that
Ceph generates automatically. If I am creating the CRUSH map manually,
do I always need to add a bucket of type 'host'? Looking through the
source code, I didn't see any requirement for this. If it is not
necessary, can OSDs of the same host be split into multiple buckets?

e.g.: Say host 1 has four OSDs (osd.0, osd.1, osd.2, osd.3) and host 2
has four OSDs (osd.4, osd.5, osd.6, osd.7), and I create two buckets:

HostGroup bucket1 {osd.0, osd.1, osd.4, osd.5}
HostGroup bucket2 {osd.2, osd.3, osd.6, osd.7}

where HostGroup is a new bucket type used instead of the default 'host'
type.


Is this configuration possible, or is it invalid? If it is possible, I
could group the SSDs of all hosts into one bucket and the HDDs into
another.
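To make this concrete, the kind of map I have in mind would look roughly
like the following when decompiled. The IDs, weights, and the rule at the
end are just placeholders I made up, so please read this as a sketch
rather than a tested map:

    # devices
    device 0 osd.0
    device 1 osd.1
    device 2 osd.2
    device 3 osd.3
    device 4 osd.4
    device 5 osd.5
    device 6 osd.6
    device 7 osd.7

    # types -- 'hostgroup' replacing the default 'host'
    type 0 osd
    type 1 hostgroup
    type 2 root

    # buckets
    hostgroup bucket1 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
        item osd.4 weight 1.000
        item osd.5 weight 1.000
    }
    hostgroup bucket2 {
        id -3
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 1.000
        item osd.3 weight 1.000
        item osd.6 weight 1.000
        item osd.7 weight 1.000
    }
    root default {
        id -1
        alg straw
        hash 0  # rjenkins1
        item bucket1 weight 4.000
        item bucket2 weight 4.000
    }

    # rules
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type hostgroup
        step emit
    }

Would crushtool accept a map like this, and would replica placement
across the two HostGroup buckets behave as expected?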

2. I have read in the Ceph docs that the same OSD should not be part of
two buckets (two pools). Is there a reason for this? I couldn't find
such a limitation in the source code.


e.g.: osd.0 is in both bucket1 and bucket2.

Is this configuration possible, or is it invalid? If it is possible, I
would have the flexibility to group data written to different pools.
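That is, a map fragment along these lines (I don't know whether crushtool
even accepts an item appearing in two buckets; this is just to illustrate
the question, with made-up names and IDs):

    host bucket1 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }
    host bucket2 {
        id -3
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000  # <- same osd.0 referenced again
        item osd.2 weight 1.000
    }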

3. Is it possible to exclude or include a particular OSD/host/rack in
the CRUSH mapping?

e.g.: I need the third replica to always be in rack3 (a specific
row/rack/host, depending on requirements); the first two can be chosen
randomly.

If this is possible, how can I configure it?
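From the rule syntax I have seen, I imagine something like the following
multi-take rule, but I am not sure it is correct. Here 'rack3' is assumed
to be a rack bucket that already exists in my map with the default types,
and I realize the first step might also pick rack3 for one of the first
two replicas:

    rule third_replica_in_rack3 {
        ruleset 2
        type replicated
        min_size 3
        max_size 3
        step take default
        step chooseleaf firstn 2 type rack
        step emit
        step take rack3
        step chooseleaf firstn 1 type host
        step emit
    }

Is this the right way to pin one replica to a specific part of the
hierarchy?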


4. It is said that OSD weights must be configured based on storage
capacity. Say I have an SSD of 512 GB and an HDD of 1 TB, and I
configure weights of 0.5 and 1 respectively; am I then treating the SSD
and HDD equally? How do I prioritize the SSD over the HDD?
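Concretely, I mean weights like these inside a host bucket (IDs are
placeholders):

    host host1 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.500  # 512 GB SSD
        item osd.1 weight 1.000  # 1 TB HDD
    }

My understanding is that these weights only control each device's share
of the data (the HDD would receive roughly twice as much as the SSD) and
say nothing about device speed, hence my question about prioritizing the
SSD.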

5. Continuing from 4): if I have a mix of SSDs and HDDs in the same
host, what are the best ways to utilize the SSD capabilities in the Ceph
cluster?
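For example, extending the idea from question 1, I was considering
splitting the hierarchy by device type and pointing pools at it with
separate rules; something like this sketch, where host1-ssd, host1-hdd,
etc. are assumed to be host buckets I would define to hold only that
host's SSDs or HDDs (all names, IDs, and weights are placeholders):

    root ssd {
        id -10
        alg straw
        hash 0  # rjenkins1
        item host1-ssd weight 0.500
        item host2-ssd weight 0.500
    }
    root hdd {
        id -11
        alg straw
        hash 0  # rjenkins1
        item host1-hdd weight 1.000
        item host2-hdd weight 1.000
    }

    rule ssd_pool {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

If I understand correctly, a pool could then be pointed at this rule with
'ceph osd pool set <pool> crush_ruleset 3'. Is this the recommended
approach, or is there something better?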


Looking forward to your help,

Thanks,

