Re: ceph octopus mysterious OSD crash

I am quite sure that this case is already covered by cephadm. A few months ago I tested it after a major rework of ceph-volume. I don't have any links right now, but I had a lab environment with multiple OSDs per node and RocksDB on SSD, and after wiping both the HDD and the DB LV, cephadm automatically redeployed the OSD according to my drive group file.
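For reference, a minimal drive group file for that kind of layout could look something like this (the service_id, host_pattern and rotational filters are illustrative, not from the cluster discussed here):

  service_type: osd
  service_id: hdd_with_ssd_db        # hypothetical name
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1                    # HDDs become the data devices
  db_devices:
    rotational: 0                    # SSDs carry the RocksDB/WAL LVs

Applied with "ceph orch apply osd -i drive_group.yml", cephadm (re)creates OSDs on any matching devices it finds unused, which is why wiping both the HDD and the DB LV was enough to trigger the redeploy.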


Quoting Stefan Kooman <stefan@xxxxxx>:

On 3/19/21 7:47 PM, Philip Brown wrote:

I see.


I don't think it works when 7 of the 8 devices are already configured and the SSD is already mostly sliced.

OK. If it is a test cluster, you might just blow it all away. By doing this you are simulating an SSD failure taking down all of its HDDs. It sure isn't pretty, but I would say the situation you ended up with is not a corner case by any means. Beyond the suggestion above, I am afraid I would really need to set up a test cluster with cephadm myself to help you further at this point.
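Concretely, cleaning up one affected OSD might look something like this (the OSD id, hostname and DB LV path are hypothetical; this assumes an OSD service spec is already in place):

  # Remove the dead OSD from the cluster:
  ceph orch osd rm 7 --force

  # Zap the HDD so it shows up as available again:
  ceph orch device zap ceph-node1 /dev/sdc --force

  # Destroy the corresponding DB LV on the shared SSD
  # (from "cephadm shell" or directly on the host):
  ceph-volume lvm zap --destroy /dev/ceph-db-vg/db-lv-osd7

Once the devices are free again, cephadm should redeploy the OSD according to the drive group file, as described above.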

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

