Pool on limited number of OSDs

Hello

I have a cluster (Nautilus 14.2.4) where I'd like to keep one pool on
dedicated OSDs. So I set up a rule that covers *3* dedicated OSDs (using
device classes) and assigned it to a pool with replication factor *3*. Only
10% of the PGs were assigned and rebalanced; the rest are stuck in the
*undersized* state.
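
For reference, the setup was along these lines (the rule, pool, and device
class names below are placeholders, not my exact commands):

    # create a replicated rule restricted to the dedicated device class
    $ ceph osd crush rule create-replicated dedicated-rule default host archive
    # point the pool at that rule, with 3 replicas
    $ ceph osd pool set mypool crush_rule dedicated-rule
    $ ceph osd pool set mypool size 3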

What mechanism prevents the CRUSH algorithm from assigning the same set of
OSDs to all PGs in a pool? How can I control it?
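
For completeness, these are the commands I can pull diagnostics from, in
case the output would help (names again placeholders):

    $ ceph osd crush rule dump dedicated-rule   # the rule assigned to the pool
    $ ceph osd crush show-tunables              # current CRUSH tunables
    $ ceph pg dump_stuck undersized             # the PGs that never got 3 OSDs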

Jacek
-- 
Jacek Suchenia
jacek.suchenia@xxxxxxxxx