Re: PG is stuck in remapped and degraded

> Hi all,
>  we use OpenStack + Ceph (Hammer) in my production environment

Hammer is soooooo 2015; it has been end-of-life for years, so an upgrade should be near the top of your to-do list.

> There are 22 OSDs on a host, and 11 OSDs share one SSD for the OSD journal.

I can’t imagine a scenario in which this strategy makes sense; the documentation and books are quite clear on why it’s a bad idea.  Assuming your OSDs are HDDs and the journal devices are SATA SSDs, the journals are going to be a bottleneck, and you’re going to wear through them quickly: with FileStore (the only backend in Hammer), every write is committed to the journal before it reaches the data disk, so one SSD has to absorb the combined write stream of all eleven OSDs behind it.  If you have a read-mostly workload, colocating the journals on the HDDs would be safer.
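To put rough numbers on that, here is a back-of-envelope sketch in Python; every throughput and endurance figure in it is an illustrative assumption, not a measurement from your cluster:

# Back-of-envelope illustration of the shared-journal bottleneck described above.
# Every number below is an assumed, illustrative figure, not a measurement.

HDD_OSDS_PER_JOURNAL = 11         # from the original post: 11 OSDs share one SSD
HDD_SUSTAINED_WRITE_MB_S = 120    # assumed sustained write rate of one HDD-backed OSD
SSD_SUSTAINED_WRITE_MB_S = 450    # assumed sustained write rate of a SATA SSD
SSD_ENDURANCE_TB_WRITTEN = 600    # assumed rated endurance of the journal SSD

# With FileStore, every write is committed to the journal before the data disk,
# so one SSD must absorb the combined write stream of all OSDs behind it.
demand_mb_s = HDD_OSDS_PER_JOURNAL * HDD_SUSTAINED_WRITE_MB_S
print(f"worst-case journal demand: {demand_mb_s} MB/s "
      f"vs ~{SSD_SUSTAINED_WRITE_MB_S} MB/s the SSD can sustain")

# Endurance: how long until the rated TB-written figure is used up.
avg_write_mb_s = 100              # assumed average write rate funnelled through this journal
seconds_to_wear_out = SSD_ENDURANCE_TB_WRITTEN * 1_000_000 / avg_write_mb_s
print(f"at {avg_write_mb_s} MB/s average, {SSD_ENDURANCE_TB_WRITTEN} TB written "
      f"is reached in roughly {seconds_to_wear_out / 86400:.0f} days")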

I also suspect that something is amiss with your CRUSH topology that is preventing recovery, and/or that you actually have multiple overlapping failures.
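If it helps, here is a small diagnostic sketch (my own, not part of Ceph) that shells out to the CLI and counts how often each OSD appears in the acting sets of stuck PGs.  It assumes "ceph pg dump_stuck unclean --format json" returns a flat JSON list of entries with pgid/state/acting fields; Hammer-era output may name or nest these differently, so adjust the parsing to match what you see:

#!/usr/bin/env python3
"""Count how often each OSD appears in the acting sets of stuck PGs.

A diagnostic sketch, not part of Ceph: it assumes the JSON from
'ceph pg dump_stuck unclean --format json' is a flat list of entries
with "pgid", "state" and "acting" fields; adjust to your release.
"""
import json
import subprocess
from collections import Counter

raw = subprocess.check_output(
    ["ceph", "pg", "dump_stuck", "unclean", "--format", "json"]
)
stuck = json.loads(raw)
if isinstance(stuck, dict):
    # Some releases wrap the list in an outer object; take the first list value.
    stuck = next((v for v in stuck.values() if isinstance(v, list)), [])

osd_hits = Counter()
for pg in stuck:
    for osd in pg.get("acting", []):
        osd_hits[osd] += 1

print(f"{len(stuck)} stuck PGs")
for osd, count in osd_hits.most_common(10):
    print(f"osd.{osd} appears in {count} stuck acting sets")

If one host's OSDs, or the eleven OSDs behind a single journal SSD, dominate that count, the problem points back at the hardware layout above rather than at CRUSH.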

