same OSD in multiple CRUSH hierarchies

Hi there,

I'm curious whether there is anything against configuring an OSD to be part of multiple CRUSH hierarchies. I'm thinking of the following scenario:

I want to create pools that use distinct sets of OSDs, so that a piece of data which is replicated at the application level can never end up on the same OSD. To do that I would create multiple CRUSH hierarchies (root - host - osd), each using a different set of OSDs, along with rules that select those hierarchies. Then I would create pools with those rules and use the different pools to store the data of the different application instances. At the same time I would like the same OSDs to stay in the "default hierarchy" that Ceph sets up, where all the hosts are under the same root bucket with the default replicated rule, so that my generic data volumes can still spread across all the OSDs available.
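To make it more concrete, here is roughly what I have in mind as a decompiled CRUSH map excerpt. The bucket, host and rule names (appA, nodeA-appA, appA_rule) and the ids/weights are just made-up examples, and I have only sketched one of the dedicated hierarchies; the point is that osd.0 and osd.1 would appear both under my dedicated root and under the default root:

    host nodeA-appA {
            id -21              # hypothetical bucket id
            alg straw2
            hash 0              # rjenkins1
            item osd.0 weight 1.000
            item osd.1 weight 1.000
    }

    root appA {
            id -20              # hypothetical bucket id
            alg straw2
            hash 0              # rjenkins1
            item nodeA-appA weight 2.000
    }

    rule appA_rule {
            id 10
            type replicated
            step take appA
            step chooseleaf firstn 0 type host
            step emit
    }

    # osd.0 and osd.1 would also remain in the default tree, e.g.:
    host nodeA {
            id -2
            alg straw2
            hash 0
            item osd.0 weight 1.000
            item osd.1 weight 1.000
            item osd.2 weight 1.000
    }

and then something like "ceph osd pool create appA_pool 64 64 replicated appA_rule" to bind a dedicated pool to that rule, while the generic pools keep using the default replicated rule.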

Is there anything that speaks against this setup?

Thank you for any advice!
Laszlo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


