Re: Missing OSD in SSD after disk failure

Hi David! Thank you very much for your response.

I'm not sure that's the problem. I tried the following (without using "rotational"):

...(snip)...
data_devices:
  size: "15G:"
db_devices:
  size: ":15G"
filter_logic: AND
placement:
  label: "osdj2"
service_id: test_db_device
service_type: osd
...(snip)...

It didn't work. I also tried without "filter_logic: AND" in the YAML file, and the result was the same.
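
For what it's worth, I've also been previewing specs before applying them with something along these lines (the file name is just an example); the dry run should show which devices cephadm intends to use as data and DB devices:

ceph orch apply -i osd_spec.yaml --dry-run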

Best regards,
Eric


-----Original Message-----
From: David Orman [mailto:ormandj@xxxxxxxxxxxx] 
Sent: 27 August 2021 14:56
To: Eric Fahnle
Cc: ceph-users@xxxxxxx
Subject: Re:  Missing OSD in SSD after disk failure

This was a bug in some versions of Ceph, which has since been fixed:

https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083

You'll want to upgrade Ceph to resolve this behavior, or, if that isn't possible, you can filter by size or some other attribute instead.
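
If you do upgrade, with cephadm it can be started with something like the following (the version is a placeholder; pick a release that contains the fix):

ceph orch upgrade start --ceph-version <version-with-fix>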

David

On Thu, Aug 19, 2021 at 9:12 AM Eric Fahnle <efahnle@xxxxxxxxxxx> wrote:
>
> Hi everyone!
> I have a question; I tried searching this list but didn't find an answer.
>
> I've got 4 OSD servers. Each server has 4 HDDs and 1 NVMe SSD disk. The deployment was done with "ceph orch apply deploy-osd.yaml", where the file "deploy-osd.yaml" contained the following:
> ---
> service_type: osd
> service_id: default_drive_group
> placement:
>   label: "osd"
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
>
> After the deployment, each HDD had an OSD, and the NVMe was shared by the 4 OSDs, holding their DBs.
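> (In case it's useful: which device an OSD uses for its DB can be checked with something like the following, where the OSD id is just an example:)
>
> ceph osd metadata 3 | grep -E 'bluefs_db|devices'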
>
> A few days ago, an HDD broke and was replaced. Ceph detected the new disk and created a new OSD for the HDD, but didn't use the NVMe. The NVMe in that server now holds the DBs for only 3 OSDs; the new OSD wasn't added to it. I couldn't find out how to re-create the OSD with the exact configuration it had before. The only "way" I found was to delete all 4 OSDs and create everything from scratch (I didn't actually do it, as I hope there is a better way).
>
> Has anyone had this issue before? I'd be glad if someone pointed me in the right direction.
>
> Currently running: version 15.2.8 octopus (stable)
>
> Thank you in advance and best regards, Eric 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


