Questions regarding Crush Map

Hi John,

On 02/09/2014 05:29, Jakes John wrote:
> Hi,
>    I have some general questions regarding the crush map. It would be helpful if someone could clarify them.
> 
> 1.  I saw that a bucket 'host' is always created in the crush maps which are automatically generated by ceph. If I am manually creating a crush map, do I always need to add a bucket called 'host'? Looking through the source code, I didn't see any need for this. If it is not necessary, can the OSDs of the same host be split into multiple buckets?
> 
> e.g.: Say host 1 has four OSDs: osd.0, osd.1, osd.2, osd.3
>       and host 2 has four OSDs: osd.4, osd.5, osd.6, osd.7
> 
> and I create two buckets:
> 
> HostGroup bucket1 - {osd.0, osd.1, osd.4, osd.5}
> HostGroup bucket2 - {osd.2, osd.3, osd.6, osd.7}, where HostGroup is a new bucket type instead of the default 'host' type.
> 
> 
> Is this configuration possible or invalid? If it is possible, I can group the SSDs of all hosts into one bucket and the HDDs into another.

What you describe seems possible, but I'm not sure what problem you are trying to solve. The crush map described at

http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds

isn't that what you want?
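
For illustration only, here is a rough, untested sketch of what such a map could look like with your osd numbering, using the existing 'root' bucket type rather than a new 'HostGroup' type (bucket names, ids and weights below are made up; the real example is in the document above). Note that with the OSDs placed directly under the roots, two replicas can land on the same physical host:

  # one root per device class, grouping the SSDs and HDDs of both hosts
  root ssd {
          id -5                   # negative ids are buckets, pick unused ones
          alg straw
          hash 0                  # rjenkins1
          item osd.0 weight 0.50
          item osd.1 weight 0.50
          item osd.4 weight 0.50
          item osd.5 weight 0.50
  }
  root hdd {
          id -6
          alg straw
          hash 0
          item osd.2 weight 1.00
          item osd.3 weight 1.00
          item osd.6 weight 1.00
          item osd.7 weight 1.00
  }
  # rule that places replicas only on the SSD OSDs
  rule ssd {
          ruleset 4
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step choose firstn 0 type osd
          step emit
  }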

> 2. I have read in the Ceph docs that the same OSD is not advised to be part of two buckets (two pools).

Indeed, a single OSD should be in a single bucket in the crush map. But it is common for an OSD to be part of multiple pools: each pool is associated with a ruleset, and several rulesets can choose from the same set of OSDs.
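
For example (the pool names and the ruleset number below are only illustrative), two pools can point at the same ruleset and therefore place their data on the same OSDs:

  ceph osd pool create images 128
  ceph osd pool create volumes 128
  # both pools now use ruleset 4, so they draw from the same set of OSDs
  ceph osd pool set images crush_ruleset 4
  ceph osd pool set volumes crush_ruleset 4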

> Is there any reason for it? I couldn't find this limitation in the source code.

There is no such limitation in the code, but the crush function has been tested and used with hierarchies where leaf nodes are not part of more than one bucket.
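
If you hand edit the map, you can check the resulting placements offline with crushtool before injecting it (the file names and rule number here are only examples):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt, then recompile and simulate the mappings for a rule
  crushtool -c crushmap.txt -o crushmap.new
  crushtool -i crushmap.new --test --rule 4 --num-rep 3 --show-mappings
  ceph osd setcrushmap -i crushmap.new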

Cheers
 
> e.g.: osd.0 is in both bucket1 and bucket2.
> 
> Is this configuration possible or invalid? If it is possible, I have the flexibility to group data that is written to different pools.
> 
> 3. Is it possible to exclude or include a particular osd/host/rack in the crush mapping?
> 
> e.g.: I need the third replica to always be in rack3 (a specified row/rack/host based on requirements). The first two can be chosen randomly.
> 
> If possible, how can I configure it?
> 
> 
> 4. It is said that OSD weights must be configured based on the storage size. Say I have an SSD of 512 GB and an HDD of 1 TB, and I configure weights of 0.5 and 1 respectively; am I treating the SSD and the HDD equally? How do I prioritize the SSD over the HDD?
> 
> 5. Continuing from 4), if I have a mix of SSDs and HDDs in the same host, what are the best ways to utilize the SSD capabilities in the ceph cluster?
> 
> 
> Looking forward to your help,
> 
> Thanks,
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


