Re: ceph octopus mysterious OSD crash

On 3/19/21 9:11 PM, Philip Brown wrote:
if we can't replace a drive on a node in a crash situation, without blowing away the entire node...
seems to me ceph octopus fails the "test" part of the "test cluster" :-/

I agree, this should not be necessary, and I'm sure there is, or will be, a solution for this. If you think it's a bug, please create an issue for it [1].

Do note, however, that you do not need to use cephadm / containers: you can still install Ceph from regular packages and configure it by hand. That said, the project is clearly moving in the direction of containerized deployments, so starting a new cluster deployed with cephadm is probably more future proof. Having said that, the cephadm documentation does state that the following should be possible:

data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc

What if you change your spec file to reflect the devices you want to use and try again?
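For example (just a minimal sketch; the service_id, hostname, and device paths below are placeholders, so adjust them to your cluster), a complete OSD service spec using explicit paths could look roughly like this:

service_type: osd
service_id: osd_explicit_sdb_sdc   # placeholder name
placement:
  hosts:
    - my-osd-host                  # placeholder hostname
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc

and then apply it with something like:

ceph orch apply osd -i osd_spec.yaml

If I remember correctly "ceph orch apply osd" also accepts a --dry-run flag that shows what cephadm would deploy before it actually touches the disks, which might be useful on a test cluster.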

Gr. Stefan

[1]: https://tracker.ceph.com/