Re: adding OSD to orchestrated system, ignoring osd service spec.

I just wanted to check whether something like the "all-available-devices" service is still managed and could be overriding your drivegroups.yml. Here's an example:

storage01:~ # ceph orch ls osd
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd                                     3  9m ago     -    <unmanaged>
osd.all-available-devices               0  -          8M   *
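
If that catch-all spec turns out to be managed and active, one way to stop it from claiming newly added disks (a sketch only, assuming a cephadm release that supports the flag; please verify against your version's docs) is to set it to unmanaged:

# stop the catch-all spec from automatically consuming new devices
storage01:~ # ceph orch apply osd --all-available-devices --unmanaged=true

After that, only your explicit drivegroups.yml spec should pick up new disks.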

You can also find more information in the cephadm.log and/or in the ceph-volume.log.
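
On the host where the new disk was added, these are the typical default locations (paths are assumptions, they can differ per deployment):

storage01:~ # less /var/log/ceph/cephadm.log
# in containerized deployments ceph-volume.log usually sits under the cluster fsid directory
storage01:~ # less /var/log/ceph/<fsid>/ceph-volume.log

Look for the ceph-volume "lvm batch" call that cephadm generated for the new disk; it should show whether a db device was passed along at all.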

Quoting Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>:

Not really, it's on an air-gapped/secure network and I cannot copy and paste from it. What are you looking for? This cluster has 720 OSDs across 18 storage nodes. I think we have identified the problem, and it may not be a Ceph issue, but we need to investigate further. It has something to do with the SSD devices that are being ignored - they are slightly different from the other ones.
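
For reference while investigating: drivegroup filters match on device properties (rotational, size, model, etc.), so comparing what the orchestrator reports for the ignored SSDs against what the spec expects can confirm this. A rough sketch (the device path is a placeholder):

storage01:~ # ceph orch device ls --wide
storage01:~ # lsblk -d -o NAME,SIZE,ROTA,MODEL /dev/xyz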
________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, January 11, 2023 3:27 AM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: adding OSD to orchestrated system, ignoring osd service spec.

Hi,

can you share the output of

storage01:~ # ceph orch ls osd

Thanks,
Eugen

Quoting Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>:

When adding a new OSD to a Ceph orchestrated system (16.2.9) on a
storage node whose OSD service specification dictates which devices
to use as db_devices (SSDs), the newly added OSDs seem to ignore the
db_devices (several are available) and put both the data and the
db/wal on the same device.
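
For reference, the spec the orchestrator actually has applied can be dumped with --export; a db_devices section along these lines is what we would expect (the service_id and host_pattern below are made up for illustration):

storage01:~ # ceph orch ls osd --export
service_type: osd
service_id: osd_spec_hdd_ssd_db
placement:
  host_pattern: 'storage*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0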

We installed the new disk (HDD) and then ran "ceph orch device zap
/dev/xyz --force" to initiate the addition process.
The OSDs that were originally added on that node were laid out
correctly, but the new ones seem to be ignoring the OSD service spec.
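
(For completeness, the orchestrator form of zap also takes the hostname, roughly as below; host and device path are placeholders:

storage01:~ # ceph orch device zap storage01 /dev/xyz --force
)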

How can we make sure that newly added devices are laid out correctly?
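
One thing that might help verify this before deployment (a sketch, assuming the spec is saved as osd_spec.yml): the orchestrator can report which OSDs it would create from a spec without actually deploying them:

# preview the OSDs the spec would create, per host, without applying anything
storage01:~ # ceph orch apply -i osd_spec.yml --dry-run

The preview should list the intended data and db devices per host.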

thanks,
Wyllys


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


