Re: Need urgent help for ceph health error issue

On Thu, 9 Dec 2021 at 09:31, Md. Hejbul Tawhid MUNNA
<munnaeebd@xxxxxxxxx> wrote:
> Yes, min_size=1 and size=2 for ssd
>
> For hdd it is min_size=1 and size=3.
>
> Could you please advise on using hdd and ssd in the same ceph cluster. Is
> it okay for production-grade openstack?

Mixing ssd and hdd in production is fine. I am not sure size=2 is
"fine", though. Having only as many drives/hosts as the size parameter
is also not a good combination with "production": any surprise or any
maintenance on one ssd will make the ssd pools stop serving I/O, and
the cluster cannot recover until that one ssd is back again, since
there is no other ssd to recover onto.
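
(Not from the original thread, just for reference: if you later add
enough ssd OSDs/hosts and want to raise the replication level and keep
ssd and hdd data separated by device class, the standard ceph CLI calls
look roughly like the below. The pool name "images-ssd" and the rule
names are placeholders.)

    # inspect the current replication settings of a pool
    ceph osd pool get images-ssd size
    ceph osd pool get images-ssd min_size

    # raise to size=3 / min_size=2 once there are enough ssd hosts
    ceph osd pool set images-ssd size 3
    ceph osd pool set images-ssd min_size 2

    # keep ssd and hdd data apart with device-class crush rules
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd pool set images-ssd crush_rule replicated_ssd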

-- 
May the most significant bit of your life be positive.


