Re: same OSD in multiple CRUSH hierarchies

Hi,
I don't think this is going to work. Each OSD belongs to exactly one host bucket, and you can't have multiple buckets (e.g. of bucket type "host") with the same name in the crush tree. But if I understand your requirement correctly, there should be no need to do it this way: if you structure your crush tree according to your separation requirements and let the critical pools use designated rules, you can still have a rule that ignores the data separation and simply distributes the replicas across all available hosts (assuming your failure domain is "host"), which is already what the default replicated_rule does. Did I misunderstand something?
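
For example, something along these lines (the bucket, rule and pool names are just placeholders for your setup):

  # group the hosts into e.g. rack buckets under the default root
  ceph osd crush add-bucket rack-a rack
  ceph osd crush add-bucket rack-b rack
  ceph osd crush move rack-a root=default
  ceph osd crush move rack-b root=default
  ceph osd crush move host1 root=default rack=rack-a
  ceph osd crush move host2 root=default rack=rack-a
  ceph osd crush move host3 root=default rack=rack-b
  ceph osd crush move host4 root=default rack=rack-b

  # designated rules for the critical pools, each taking only its own rack
  ceph osd crush rule create-replicated rule-rack-a rack-a host
  ceph osd crush rule create-replicated rule-rack-b rack-b host
  ceph osd pool set app-a-pool crush_rule rule-rack-a
  ceph osd pool set app-b-pool crush_rule rule-rack-b

  # the generic pools simply keep the default replicated_rule,
  # which takes root=default and still sees all hosts and OSDs

The OSDs stay exactly where they are (each under its one host bucket), only the hosts get grouped, and every rule just picks a different entry point into the same tree.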

Regards,
Eugen


Quoting Budai Laszlo <laszlo.budai@xxxxxxxxx>:

Hi there,

I'm curious whether there is anything against configuring an OSD to be part of multiple CRUSH hierarchies. I'm thinking of the following scenario:

I want to create pools that use distinct sets of OSDs, to make sure that a piece of data which is replicated at the application level does not end up on the same OSD. So I would create multiple CRUSH hierarchies (root - host - osd), each using different OSDs, along with different rules that use those hierarchies. Then I would create pools with those different rules and use them to store the data of the different application instances. But I would also like to use the OSDs in the "default hierarchy" set up by ceph, where all the hosts are in the same root bucket, with the default replicated rule, so that my generic data volumes could spread across all the available OSDs.
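
Roughly, for one of the dedicated hierarchies, I was thinking of something like this (names, IDs and weights are just examples):

  # dedicated root and host buckets for application instance "a"
  ceph osd crush add-bucket app-a-root root
  ceph osd crush add-bucket app-a-host1 host
  ceph osd crush move app-a-host1 root=app-a-root
  # place an OSD under the new host bucket (this sets osd.0's crush location)
  ceph osd crush set osd.0 1.0 root=app-a-root host=app-a-host1
  # rule and pool that only use this hierarchy
  ceph osd crush rule create-replicated app-a-rule app-a-root host
  ceph osd pool create app-a-pool 64 64 replicated app-a-rule

What I'm not sure about is how osd.0 could at the same time remain under its original host in the default root.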

Is there something against this setup?

Thank you for any advice!
Laszlo


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


