Re: db_devices doesn't show up in exported osd service spec

I ran into the same situation when I created my first Octopus cluster. After purging everything, I started over and used a "model" filter instead of "rotational: 0" for data_devices in the spec, and this time it worked fine: the filter appears both in the output of `orch apply` and `orch ls --export`, as well as in the "devices" section of `ceph osd metadata <id>` output. Try that instead of "size", maybe? (It helped that all the SSDs in this cluster are the same model.)
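
Applied to the spec from earlier in this thread, that would look something like the sketch below. The model string is a placeholder, not from my cluster; the real string for your drives is shown by `ceph orch device ls`:

service_type: osd
service_id: osd-spec
placement:
  host_pattern: ceph-osd-[1-3]
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
    model: 'SAMSUNG MZ7KH960'   # placeholder; copy the real model string from `ceph orch device ls`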

Note: as part of purging my previous attempt, I made sure all traces of LVM were gone from those drives, i.e., lvremove/vgremove/pvremove.
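
For what it's worth, `ceph orch device zap` should do the same cleanup in one step, e.g. (host and device names here are just examples):

ceph orch device zap ceph-osd-1 /dev/sdb --force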

Davor

> On Feb 9, 2021, at 10:05 PM, Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:
> 
> With db_devices.size, db_devices shows up in "orch ls --export",
> but no DB device/LVM is created for the OSD. Any clues?
> 
> Thanks!
> Tony
> ________________________________________
> From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
> Sent: February 9, 2021 01:16 AM
> To: ceph-users@xxxxxxx
> Subject:  Re: db_devices doesn't show up in exported osd service spec
> 
> Hi Tony.
> 
> I assume they used a size constraint instead of rotational. So if all your SSDs are 1TB or less, and all HDDs are larger than that, you could use:
> 
> spec:
>  objectstore: bluestore
>  data_devices:
>    rotational: true
>  filter_logic: AND
>  db_devices:
>    size: ':1TB'
> 
> It was usable in my test environment, and seems to work.
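> 
> For reference, the size filter accepts ranges in a few forms (per the drive group spec docs):
> 
>  size: ':1TB'     # at most 1TB
>  size: '10G:'     # at least 10G
>  size: '10G:1TB'  # between 10G and 1TB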
> 
> Regards
> 
> Jens
> 
> 
> -----Original Message-----
> From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> Sent: 9 February 2021 02:09
> To: David Orman <ormandj@xxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject:  Re: db_devices doesn't show up in exported osd service spec
> 
> Hi David,
> 
> Could you show me an example of an OSD service spec YAML to work around it by specifying size?
> 
> Thanks!
> Tony
> ________________________________________
> From: David Orman <ormandj@xxxxxxxxxxxx>
> Sent: February 8, 2021 04:06 PM
> To: Tony Liu
> Cc: ceph-users@xxxxxxx
> Subject: Re:  Re: db_devices doesn't show up in exported osd service spec
> 
> Adding ceph-users:
> 
> We ran into this same issue, and we used a size specification as a workaround for now.
> 
> Bug and patch:
> 
> https://tracker.ceph.com/issues/49014
> https://github.com/ceph/ceph/pull/39083
> 
> Backport to Octopus:
> 
> https://github.com/ceph/ceph/pull/39171
> 
> On Sat, Feb 6, 2021 at 7:05 PM Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:
> Adding dev to the thread for comment.
> 
> With 15.2.8, when applying an OSD service spec, db_devices is gone.
> Here is the service spec file.
> ==========================================
> service_type: osd
> service_id: osd-spec
> placement:
>  hosts:
>  - ceph-osd-1
> spec:
>  objectstore: bluestore
>  data_devices:
>    rotational: 1
>  db_devices:
>    rotational: 0
> ==========================================
> 
> Here is the logging from the mon. The message tagged "Tony" was added by me in the mgr to confirm what the mgr received. The audit from the mon shows db_devices is gone.
> Is there anything in mon to filter that out based on host info?
> How can I trace it?
> ==========================================
> audit 2021-02-07T00:45:38.106171+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4020 : audit [DBG] from='client.24184218 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "target": ["mon-mgr", ""]}]: dispatch
> cephadm 2021-02-07T00:45:38.108546+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec preview refresh.
> cephadm 2021-02-07T00:45:38.108798+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with placement ceph-osd-1
> cephadm 2021-02-07T00:45:38.108893+0000 mgr.ceph-control-1.nxjnzz (mgr.24142551) 4023 : cephadm [INF] Tony: spec: <bound method ServiceSpec.to_json of DriveGroupSpec(name=osd-spec->placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1', network='', name='')]), service_id='osd-spec', service_type='osd', data_devices=DeviceSelection(rotational=1, all=False), db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False)>
> audit 2021-02-07T00:45:38.109782+0000 mon.ceph-control-3 (mon.2) 25 : audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
> audit 2021-02-07T00:45:38.110133+0000 mon.ceph-control-1 (mon.0) 107 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]: dispatch
> audit 2021-02-07T00:45:38.152756+0000 mon.ceph-control-1 (mon.0) 108 : audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\": \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\": [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\": \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\": {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\": \"bluestore\"}}}"}]': finished
> ==========================================
> 
> Thanks!
> Tony
>> -----Original Message-----
>> From: Jens Hyllegaard (Soft Design A/S)
>> <jens.hyllegaard@xxxxxxxxxxxxx>
>> Sent: Thursday, February 4, 2021 6:31 AM
>> To: ceph-users@xxxxxxx
>> Subject:  Re: db_devices doesn't show up in exported osd
>> service spec
>> 
>> Hi.
>> 
>> I have the same situation. Running 15.2.8, I created a specification
>> that looked just like it, with rotational in the data devices and
>> non-rotational in the db devices.
>> 
>> The first apply worked fine. Afterwards it only uses the HDDs, not the SSDs.
>> 
>> Also, is there a way to remove an unused osd service? I managed to
>> create osd.all-available-devices when I tried to stop the autocreation
>> of OSDs, using: ceph orch apply osd --all-available-devices --unmanaged=true
>> 
>> I created the original OSD using the web interface.
>> 
>> Regards
>> 
>> Jens
>> -----Original Message-----
>> From: Eugen Block <eblock@xxxxxx>
>> Sent: 3 February 2021 11:40
>> To: Tony Liu <tonyliu0592@xxxxxxxxxxx>
>> Cc: ceph-users@xxxxxxx
>> Subject:  Re: db_devices doesn't show up in exported osd
>> service spec
>> 
>> How do you manage the DB sizes on your SSDs? Is that handled
>> automatically by ceph-volume? You could try adding another option and
>> see what it does, maybe block_db_size?
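>> 
>> For example, something like this (4G is just a placeholder value):
>> 
>> spec:
>>   block_db_size: 4G
>>   data_devices:
>>     rotational: 1
>>   db_devices:
>>     rotational: 0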
>> 
>> 
>> Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:
>> 
>>> All mon, mgr, crash and osd daemons are upgraded to 15.2.8. That
>>> actually fixed another issue (no devices listed after adding a host).
>>> But this issue remains.
>>> ```
>>> # cat osd-spec.yaml
>>> service_type: osd
>>> service_id: osd-spec
>>> placement:
>>>  host_pattern: ceph-osd-[1-3]
>>> data_devices:
>>>  rotational: 1
>>> db_devices:
>>>  rotational: 0
>>> 
>>> # ceph orch apply osd -i osd-spec.yaml
>>> Scheduled osd.osd-spec update...
>>> 
>>> # ceph orch ls --service_name osd.osd-spec --export
>>> service_type: osd
>>> service_id: osd-spec
>>> service_name: osd.osd-spec
>>> placement:
>>>  host_pattern: ceph-osd-[1-3]
>>> spec:
>>>  data_devices:
>>>    rotational: 1
>>>  filter_logic: AND
>>>  objectstore: bluestore
>>> ```
>>> db_devices still doesn't show up.
>>> Keep scratching my head...
>>> 
>>> 
>>> Thanks!
>>> Tony
>>>> -----Original Message-----
>>>> From: Eugen Block <eblock@xxxxxx>
>>>> Sent: Tuesday, February 2, 2021 2:20 AM
>>>> To: ceph-users@xxxxxxx
>>>> Subject:  Re: db_devices doesn't show up in exported
>>>> osd service spec
>>>> 
>>>> Hi,
>>>> 
>>>> I would recommend updating (again); here's my output from a 15.2.8
>>>> test cluster:
>>>> 
>>>> 
>>>> host1:~ # ceph orch ls --service_name osd.default --export
>>>> service_type: osd
>>>> service_id: default
>>>> service_name: osd.default
>>>> placement:
>>>>   hosts:
>>>>   - host4
>>>>   - host3
>>>>   - host1
>>>>   - host2
>>>> spec:
>>>>   block_db_size: 4G
>>>>   data_devices:
>>>>     rotational: 1
>>>>     size: '20G:'
>>>>   db_devices:
>>>>     size: '10G:'
>>>>   filter_logic: AND
>>>>   objectstore: bluestore
>>>> 
>>>> 
>>>> Regards,
>>>> Eugen
>>>> 
>>>> 
>>>> Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> When I initially built the cluster on Octopus 15.2.5, here is the
>>>>> OSD service spec file that was applied.
>>>>> ```
>>>>> service_type: osd
>>>>> service_id: osd-spec
>>>>> placement:
>>>>>  host_pattern: ceph-osd-[1-3]
>>>>> data_devices:
>>>>>  rotational: 1
>>>>> db_devices:
>>>>>  rotational: 0
>>>>> ```
>>>>> After applying it, all HDDs were added and the DB for each HDD was
>>>>> created on an SSD.
>>>>> 
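>>>>> (One way to confirm where each DB landed: `ceph osd metadata <osd-id>` and look at the bluefs/db fields; exact key names vary by release.)
>>>>> 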
>>>>> Here is the export of OSD service spec.
>>>>> ```
>>>>> # ceph orch ls --service_name osd.osd-spec --export
>>>>> service_type: osd
>>>>> service_id: osd-spec
>>>>> service_name: osd.osd-spec
>>>>> placement:
>>>>>  host_pattern: ceph-osd-[1-3]
>>>>> spec:
>>>>>  data_devices:
>>>>>    rotational: 1
>>>>>  filter_logic: AND
>>>>>  objectstore: bluestore
>>>>> ```
>>>>> Why db_devices doesn't show up there?
>>>>> 
>>>>> When I replaced a disk recently, the OSD was automatically
>>>>> re-created once the new disk was installed and zapped, but the DB
>>>>> was created on the HDD, not the SSD. I assume this is because of that missing db_devices?
>>>>> 
>>>>> I tried updating the service spec, with the same result: db_devices
>>>>> doesn't show up when I export it.
>>>>> 
>>>>> Is this some known issue or something I am missing?
>>>>> 
>>>>> 
>>>>> Thanks!
>>>>> Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



