Creating Ceph Pools on different OSDs -- crushmap?


Hello,

 

I am trying to address the failure domain and performance/isolation of pools based on which OSDs they can belong to. Let me give an example. Can I achieve this with a CRUSH map ruleset or some other method, and if so, how?

 

Example:

10x storage servers, each with 3x OSDs, i.e. OSD.0 through OSD.29 -- belong to Pool0. This can be a replicated pool or an EC pool.

 

Similarly,

 

10x storage servers, each with 5x OSDs, i.e. OSD.30 through OSD.79 -- belong to Pool1. This can be a replicated pool or an EC pool.
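
In case it helps make the question concrete, below is the rough kind of CRUSH layout I have in mind, written against the Luminous-style CLI. All bucket, rule and host names are placeholders I made up, and I have not tested this -- it is just a sketch of the intent:

    # Two separate CRUSH roots, one per pool
    ceph osd crush add-bucket pool0-root root
    ceph osd crush add-bucket pool1-root root

    # Move the ten 3-OSD hosts under pool0-root and the ten 5-OSD hosts
    # under pool1-root (repeat for each host in its group)
    ceph osd crush move host01 root=pool0-root
    ceph osd crush move host11 root=pool1-root

    # One replicated rule per root, with host as the failure domain
    ceph osd crush rule create-replicated pool0-rule pool0-root host
    ceph osd crush rule create-replicated pool1-rule pool1-root host

    # Create each pool on its own rule (pg counts here are arbitrary)
    ceph osd pool create pool0 1024 1024 replicated pool0-rule
    ceph osd pool create pool1 2048 2048 replicated pool1-rule

For the EC case I assume the equivalent would be an erasure-code profile with crush-root set to the matching root (and crush-failure-domain=host), but I am not sure of the details. On pre-Luminous releases I guess the same thing would be done by decompiling the crushmap and adding "step take <root>" rules by hand.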

 

 

Thanks for any info.

 

--

Deepak



