Re: osdspec_affinity error in the Cephadm module

In the end, forcing a restart of the cephadm module at least cleared the
"error" status, and nodes were no longer wrongly shown as offline (another
symptom of the failed module state).
One pending OSD creation did then proceed.
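
For reference, forcing a restart of the cephadm module usually amounts to one
of the following (a rough sketch only; I'm not claiming these exact commands
are what cleared everything here):

  # Option 1: fail over the active mgr, which restarts all mgr modules
  ceph mgr fail

  # Option 2: bounce just the cephadm module
  ceph mgr module disable cephadm
  ceph mgr module enable cephadm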

Cephadm still keeps trying to apply the OSD spec, but it now logs the failure
as a warning rather than crashing the module.
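
Based on the description quoted below (an LV that isn't used by Ceph, and so
has no osdspec_affinity metadata, tripping up drive selection), one way to
hunt for the offending device is to list the LVs and their tags on the
affected host. A rough sketch, assuming standard LVM tooling and that
ceph-volume records its metadata as ceph.* LV tags:

  # List every LV with its backing devices and tags; Ceph-managed LVs normally
  # carry ceph.* tags, so an LV that shares a device with Ceph LVs but has no
  # such tags is a likely culprit.
  lvs -a -o lv_name,vg_name,devices,lv_tags

  # ceph-volume's own view of the LVs it manages on this host (if the cephadm
  # binary is available there)
  cephadm ceph-volume lvm list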

On Wed, 16 Aug 2023 at 18:03, Adam King <adking@xxxxxxxxxx> wrote:

> It looks like you've hit https://tracker.ceph.com/issues/58946, which has
> a candidate fix open but nothing merged yet. The description on the PR with
> the candidate fix says "When osdspec_affinity is not set, the drive
> selection code will fail. This can happen when a device has multiple LVs
> where some are used by Ceph and at least one LV isn't used by Ceph." So
> that may be a starting point for finding a workaround for now.
>
> On Wed, Aug 16, 2023 at 12:05 PM Adam Huffman <
> adam.huffman.lists@xxxxxxxxx> wrote:
>
>> I've been having fun today trying to introduce a new disk, replacing a
>> failing one, into a cluster.
>>
>> One of my attempts to apply an OSD spec was clearly wrong, because I now
>> have this error:
>>
>> Module 'cephadm' has failed: 'osdspec_affinity'
>>
>> and this was the traceback in the mgr logs:
>>
>>  Traceback (most recent call last):
>>    File "/usr/share/ceph/mgr/cephadm/utils.py", line 77, in do_work
>>      return f(*arg)
>>    File "/usr/share/ceph/mgr/cephadm/serve.py", line 224, in refresh
>>      r = self._refresh_host_devices(host)
>>    File "/usr/share/ceph/mgr/cephadm/serve.py", line 396, in _refresh_host_devices
>>      self.update_osdspec_previews(host)
>>    File "/usr/share/ceph/mgr/cephadm/serve.py", line 412, in update_osdspec_previews
>>      previews.extend(self.mgr.osd_service.get_previews(search_host))
>>    File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 258, in get_previews
>>      return self.generate_previews(osdspecs, host)
>>    File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 291, in generate_previews
>>      for host, ds in self.prepare_drivegroup(osdspec):
>>    File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 225, in prepare_drivegroup
>>      existing_daemons=len(dd_for_spec_and_host))
>>    File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 35, in __init__
>>      self._data = self.assign_devices('data_devices', self.spec.data_devices)
>>    File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 19, in wrapper
>>      return f(self, ds)
>>    File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 134, in assign_devices
>>      if lv['osdspec_affinity'] != self.spec.service_id:
>>  KeyError: 'osdspec_affinity'
>>
>> This cluster is running 16.2.13.
>>
>> The exported service spec is:
>>
>> service_type: osd
>> service_id: osd_spec-0.3
>> service_name: osd.osd_spec-0.3
>> placement:
>>   host_pattern: cepho-*
>> spec:
>>   data_devices:
>>     rotational: true
>>   db_devices:
>>     model: SSDPE2KE032T8L
>>   encrypted: true
>>   filter_logic: AND
>>   objectstore: bluestore
>>
>> Best Wishes,
>> Adam



