Re: Missing OSD in SSD after disk failure

Hi,

this seems to be a recurring issue; I had the same thing just yesterday in my lab environment running 15.2.13. If I don't specify additional criteria in the YAML file, I end up with standalone OSDs instead of the desired RocksDB on SSD. Maybe this is still a bug, I didn't check. My workaround is this spec file:

---snip---
block_db_size: 4G
data_devices:
  size: "20G:"
  rotational: 1
db_devices:
  size: "10G"
  rotational: 0
filter_logic: AND
placement:
  hosts:
  - host4
  - host3
  - host1
  - host2
service_id: default
service_type: osd
---snip---
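
Applying the spec is then straightforward; a dry run first shows what the orchestrator would deploy (the file name is just my example):

---snip---
# preview the resulting OSD layout without deploying anything
ceph orch apply osd -i osd-spec.yaml --dry-run

# apply the spec for real
ceph orch apply osd -i osd-spec.yaml
---snip---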

If you apply the new spec file and then destroy and zap the standalone OSD, I believe the orchestrator should redeploy it correctly; it did in my case. But as I said, this is just a small lab environment.
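
On my lab cluster the whole sequence looked roughly like this (the OSD ID, host name and device path are placeholders for your standalone OSD):

---snip---
# apply the corrected spec
ceph orch apply osd -i osd-spec.yaml

# remove the standalone OSD, keeping its ID reserved for the replacement
ceph orch osd rm 4 --replace --force

# wipe the HDD so the orchestrator can redeploy it according to the spec
ceph orch device zap host4 /dev/sdb --force
---snip---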

Regards,
Eugen


Quoting Eric Fahnle <efahnle@xxxxxxxxxxx>:

Hi everyone!
I've got a question; I tried searching for it in this list, but didn't find an answer.

I've got 4 OSD servers. Each server has 4 HDDs and 1 NVMe SSD. The deployment was done with "ceph orch apply -i deploy-osd.yaml", where deploy-osd.yaml contained the following:
---
service_type: osd
service_id: default_drive_group
placement:
  label: "osd"
data_devices:
  rotational: 1
db_devices:
  rotational: 0

After the deployment, each HDD had an OSD, and the NVMe was shared by all 4 OSDs as their DB device.
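
Which OSDs actually use the NVMe for their DB can be verified per OSD, e.g. (the OSD ID is just an example):

---snip---
# look for "bluefs_dedicated_db": "1" and the bluefs db device entries
ceph osd metadata 4 | grep -i db
---snip---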

A few days ago, an HDD broke and was replaced. Ceph detected the new disk and created a new OSD on the HDD, but didn't use the NVMe: the NVMe in that server now serves only 3 OSDs, and the new OSD wasn't added to it. I couldn't find out how to re-create the OSD with the exact configuration it had before. The only "way" I found was to delete all 4 OSDs and create everything from scratch (I didn't actually do it, as I hope there is a better way).

Has anyone had this issue before? I'd be glad if someone pointed me in the right direction.

Currently running: version 15.2.8 octopus (stable)

Thank you in advance and best regards,
Eric



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


