Re: max number of pools per cluster


> And if for any reason even a single PG was damaged and, for example, stuck
> inactive - then all RBDs will be affected.
>
> The first thing that comes to mind is to create a separate pool for every RBD.

I think this is insane.
It is better to think about how Kipod saves data via CRUSH. Plan your failure domains and perform full-stack monitoring (hosts, power, network...).
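For example, a replicated CRUSH rule along these lines (a minimal sketch; the rule name, id and the "default" root are assumptions - adjust to your own map) keeps each replica on a different host, so losing one host does not leave a PG without enough copies:

rule replicated_host {
    id 1
    type replicated
    min_size 1
    max_size 10
    # pick OSDs from distinct hosts under the default root
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

With pool size=3 and min_size=2 on such a rule, a single failed host still leaves PGs active, which addresses the "one stuck PG affects all RBDs" concern better than one pool per image.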





k
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


