Replace block drives of combined NVMe+HDD OSDs

Hi,

Unfortunately, some of our HDDs have failed and we need to replace these drives,
which are part of "combined" OSDs (DB/WAL on NVMe, block storage on HDD).
All OSDs are defined with a service specification similar to this one:

```
service_type: osd
service_id: ceph02_combined_osd
service_name: osd.ceph02_combined_osd
placement:
  hosts:
  - ceph02
spec:
  data_devices:
    paths:
    - /dev/sda
    - /dev/sdb
    - /dev/sdc
    - /dev/sdd
    - /dev/sde
    - /dev/sdf
    - /dev/sdg
    - /dev/sdh
    - /dev/sdi
  db_devices:
    paths:
    - /dev/nvme0n1
    - /dev/nvme1n1
  filter_logic: AND
  objectstore: bluestore
```

In the above example, HDDs `sda` and `sdb` are no longer readable, so their
data cannot simply be copied over to new HDDs. The DB/WAL partitions on
`nvme0n1` are intact, but I assume that data is now useless. I think the best
approach is to replace the dead drives and rebuild each affected OSD from
scratch. How should we go about this, preferably in a way that leaves the
other OSDs on the node unaffected and operational?
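
From the documentation I have found so far, the cephadm replacement workflow
seems to be roughly the following (the OSD ID `3` below is a placeholder --
substitute the actual IDs of the OSDs backed by `sda` and `sdb`):

```
# Find which OSD IDs sit on the failed devices on ceph02
ceph device ls-by-host ceph02

# Mark the OSD for replacement: removes the daemon, zaps the associated
# DB/WAL LV on the NVMe, and keeps the OSD ID reserved ("destroyed") so
# the replacement reuses it. Other OSDs on the host stay up throughout.
ceph orch osd rm 3 --replace --zap

# Watch removal progress
ceph orch osd rm status

# After physically swapping in the new HDD, force the orchestrator to
# rescan devices so the existing spec can recreate the OSD
ceph orch device ls --refresh
```

Since our spec lists data devices by explicit path, I assume the new disk has
to come up under the same `/dev/sdX` path (or the spec has to be updated) for
the orchestrator to pick it up -- is that correct?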

I would appreciate any advice or pointers to the relevant documentation.

Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


