Re: Missing OSD in SSD after disk failure

Can you check what ceph-volume would do if you did it manually? Something like this

host1:~ # cephadm ceph-volume lvm batch --report /dev/vdc /dev/vdd --db-devices /dev/vdb

and don't forget the '--report' flag. One more question: did you properly wipe the previous LV on that NVMe? You should also have some logs available from the deployment attempt, maybe they reveal why the NVMe was not considered.
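
If the old DB LV was not cleaned up, something along these lines should wipe it and show the recent cephadm log entries (the LV path below is just a placeholder, adjust it to your setup):

host1:~ # cephadm ceph-volume lvm zap --destroy /dev/ceph-<vg>/osd-db-<lv>   # placeholder LV path
host1:~ # ceph orch device ls host1 --refresh
host1:~ # ceph log last cephadm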


Zitat von Eric Fahnle <efahnle@xxxxxxxxxxx>:

Hi Eugen, thanks for the reply.

I've already tried what you wrote in your answer, but still no luck.

The NVMe disk still doesn't have the OSD. Please note I'm using containers, not standalone OSDs.

Any ideas?

Regards,
Eric

________________________________
Message: 2
Date: Fri, 20 Aug 2021 06:56:59 +0000
From: Eugen Block <eblock@xxxxxx>
Subject:  Re: Missing OSD in SSD after disk failure
To: ceph-users@xxxxxxx
Message-ID:
        <20210820065659.Horde.Azw9eV10u5ynqKwJpUyrg6_@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes

Hi,

this seems to be a recurring issue, I had the same just yesterday in
my lab environment running 15.2.13. If I don't specify additional
criteria in the yaml file I end up with standalone OSDs instead of
the desired RocksDB on SSD. Maybe this is still a bug, I didn't
check. My workaround is this spec file:

---snip---
block_db_size: 4G
data_devices:
   size: "20G:"
   rotational: 1
db_devices:
   size: "10G"
   rotational: 0
filter_logic: AND
placement:
   hosts:
   - host4
   - host3
   - host1
   - host2
service_id: default
service_type: osd
---snip---

If you apply the new spec file, then destroy and zap the standalone
OSD, I believe the orchestrator should redeploy it correctly; it did
in my case. But as I said, this is just a small lab environment.
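
For reference, roughly the steps I mean (the OSD id, host and device
names are just examples from my lab, adjust them to your environment):

host1:~ # ceph orch apply -i osd-spec.yaml            # the spec shown above
host1:~ # ceph orch osd rm 7                          # '7' is just an example OSD id
host1:~ # ceph orch device zap host1 /dev/vdd --force # example device path
host1:~ # ceph orch device ls host1 --refresh

Once the device shows up as available again, the orchestrator should
pick it up according to the spec.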

Regards,
Eugen




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


