Re: db_devices doesn't show up in exported osd service spec

Adding ceph-users:

We ran into this same issue and used a size specification as a workaround
for now.
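
For example, a spec along these lines selects the db_devices by a size range
instead of the rotational flag (the placement pattern and the size bounds
below are placeholders, adjust them to your hosts and drives):

```
service_type: osd
service_id: osd-spec
placement:
  host_pattern: 'ceph-osd-*'
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
    size: '200G:800G'   # placeholder: a range that only the SSDs fall into
```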

Bug and patch:

https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083

Backport to Octopus:

https://github.com/ceph/ceph/pull/39171
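
For anyone curious why rotational: 0 in particular triggers this, here is a
minimal Python sketch of the general failure mode (illustrative only, not the
actual cephadm code or the patch above): a serializer that skips falsy values
turns a selection whose only filter is rotational=0 into an empty object, and
the empty object is then dropped from the stored spec. That matches the logs
below, where the in-memory spec still carries db_devices but the JSON written
to the mon no longer does.

```
# Illustrative sketch only: not the actual ceph code or the fix.
class DeviceSelection:
    def __init__(self, rotational=None, size=None):
        self.rotational = rotational
        self.size = size

    def to_json(self):
        # Skipping "falsy" values also skips rotational=0.
        return {k: v for k, v in vars(self).items() if v}


class DriveGroupSpec:
    def __init__(self, data_devices=None, db_devices=None):
        self.data_devices = data_devices
        self.db_devices = db_devices

    def to_json(self):
        out = {}
        for key, selection in vars(self).items():
            if selection is None:
                continue
            as_json = selection.to_json()
            if as_json:            # an "empty" selection is dropped entirely
                out[key] = as_json
        return out


spec = DriveGroupSpec(
    data_devices=DeviceSelection(rotational=1),
    db_devices=DeviceSelection(rotational=0),
)
print(spec.to_json())
# prints {'data_devices': {'rotational': 1}} and db_devices is gone
```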

On Sat, Feb 6, 2021 at 7:05 PM Tony Liu <tonyliu0592@xxxxxxxxxxx> wrote:

> Adding dev to comment.
>
> With 15.2.8, when applying the OSD service spec, db_devices is gone.
> Here is the service spec file.
> ==========================================
> service_type: osd
> service_id: osd-spec
> placement:
>   hosts:
>   - ceph-osd-1
> spec:
>   objectstore: bluestore
>   data_devices:
>     rotational: 1
>   db_devices:
>     rotational: 0
> ==========================================
>
> Here is the logging from the mon. The message with "Tony" was added by me
> in the mgr to confirm. The audit from the mon shows db_devices is gone.
> Is there anything in the mon that filters it out based on host info?
> How can I trace it?
> ==========================================
> audit 2021-02-07T00:45:38.106171+0000 mgr.ceph-control-1.nxjnzz
> (mgr.24142551) 4020 : audit [DBG] from='client.24184218 -'
> entity='client.admin' cmd=[{"prefix": "orch apply osd", "target":
> ["mon-mgr", ""]}]: dispatch
> cephadm 2021-02-07T00:45:38.108546+0000 mgr.ceph-control-1.nxjnzz
> (mgr.24142551) 4021 : cephadm [INF] Marking host: ceph-osd-1 for OSDSpec
> preview refresh.
> cephadm 2021-02-07T00:45:38.108798+0000 mgr.ceph-control-1.nxjnzz
> (mgr.24142551) 4022 : cephadm [INF] Saving service osd.osd-spec spec with
> placement ceph-osd-1
> cephadm 2021-02-07T00:45:38.108893+0000 mgr.ceph-control-1.nxjnzz
> (mgr.24142551) 4023 : cephadm [INF] Tony: spec: <bound method
> ServiceSpec.to_json of
> DriveGroupSpec(name=osd-spec->placement=PlacementSpec(hosts=[HostPlacementSpec(hostname='ceph-osd-1',
> network='', name='')]), service_id='osd-spec', service_type='osd',
> data_devices=DeviceSelection(rotational=1, all=False),
> db_devices=DeviceSelection(rotational=0, all=False), osd_id_claims={},
> unmanaged=False, filter_logic='AND', preview_only=False)>
> audit 2021-02-07T00:45:38.109782+0000 mon.ceph-control-3 (mon.2) 25 :
> audit [INF] from='mgr.24142551 10.6.50.30:0/2838166251'
> entity='mgr.ceph-control-1.nxjnzz' cmd=[{"prefix":"config-key
> set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
> \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
> [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
> \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
> {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
> \"bluestore\"}}}"}]: dispatch
> audit 2021-02-07T00:45:38.110133+0000 mon.ceph-control-1 (mon.0) 107 :
> audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz'
> cmd=[{"prefix":"config-key
> set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
> \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
> [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
> \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
> {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
> \"bluestore\"}}}"}]: dispatch
> audit 2021-02-07T00:45:38.152756+0000 mon.ceph-control-1 (mon.0) 108 :
> audit [INF] from='mgr.24142551 ' entity='mgr.ceph-control-1.nxjnzz'
> cmd='[{"prefix":"config-key
> set","key":"mgr/cephadm/spec.osd.osd-spec","val":"{\"created\":
> \"2021-02-07T00:45:38.108810\", \"spec\": {\"placement\": {\"hosts\":
> [\"ceph-osd-1\"]}, \"service_id\": \"osd-spec\", \"service_name\":
> \"osd.osd-spec\", \"service_type\": \"osd\", \"spec\": {\"data_devices\":
> {\"rotational\": 1}, \"filter_logic\": \"AND\", \"objectstore\":
> \"bluestore\"}}}"}]': finished
> ==========================================
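>
> (One way to see what actually got persisted is to read the stored spec back
> from the config-key store, using the key name from the audit entries above,
> and compare it with the orchestrator's export. These commands only read
> state, so they help trace where db_devices is lost rather than fix it.)
> ==========================================
> # dump the stored spec JSON (key name taken from the audit log)
> ceph config-key get mgr/cephadm/spec.osd.osd-spec
>
> # compare with what the orchestrator reports for the same service
> ceph orch ls --service_name osd.osd-spec --export
> ==========================================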
>
> Thanks!
> Tony
> > -----Original Message-----
> > From: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
> > Sent: Thursday, February 4, 2021 6:31 AM
> > To: ceph-users@xxxxxxx
> > Subject:  Re: db_devices doesn't show up in exported osd
> > service spec
> >
> > Hi.
> >
> > I have the same situation. Running 15.2.8, I created a specification that
> > looked just like it, with rotational devices for the data and
> > non-rotational devices for the db.
> >
> > The first apply worked fine; afterwards it only uses the HDDs, not the SSDs.
> > Also, is there a way to remove an unused osd service?
> > I managed to create osd.all-available-devices when I tried to stop the
> > autocreation of OSDs, using:
> > ceph orch apply osd --all-available-devices --unmanaged=true
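> >
> > As far as I know (please verify on a test cluster first), the leftover
> > spec can be removed with "ceph orch rm", which should only delete the
> > service spec and leave existing OSDs untouched:
> >
> > ceph orch rm osd.all-available-devices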
> >
> > I created the original OSD using the web interface.
> >
> > Regards
> >
> > Jens
> > -----Original Message-----
> > From: Eugen Block <eblock@xxxxxx>
> > Sent: 3. februar 2021 11:40
> > To: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> > Cc: ceph-users@xxxxxxx
> > Subject:  Re: db_devices doesn't show up in exported osd
> > service spec
> >
> > How do you manage the DB sizes on your SSDs? Is that handled automatically
> > by ceph-volume? You could try adding another option and see what it does,
> > maybe block_db_size?
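> >
> > For example (the size is a placeholder; a similar working spec appears in
> > the quoted message further down), block_db_size goes into the spec section
> > next to the device filters:
> >
> > spec:
> >   block_db_size: 4G
> >   data_devices:
> >     rotational: 1
> >   db_devices:
> >     rotational: 0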
> >
> >
> > Zitat von Tony Liu <tonyliu0592@xxxxxxxxxxx>:
> >
> > > All mon, mgr, crash and osd daemons are upgraded to 15.2.8. That
> > > actually fixed another issue (no devices listed after adding a host),
> > > but this issue remains.
> > > ```
> > > # cat osd-spec.yaml
> > > service_type: osd
> > > service_id: osd-spec
> > > placement:
> > >   host_pattern: ceph-osd-[1-3]
> > > data_devices:
> > >   rotational: 1
> > > db_devices:
> > >   rotational: 0
> > >
> > > # ceph orch apply osd -i osd-spec.yaml Scheduled osd.osd-spec
> > > update...
> > >
> > > # ceph orch ls --service_name osd.osd-spec --export
> > > service_type: osd
> > > service_id: osd-spec
> > > service_name: osd.osd-spec
> > > placement:
> > >   host_pattern: ceph-osd-[1-3]
> > > spec:
> > >   data_devices:
> > >     rotational: 1
> > >   filter_logic: AND
> > >   objectstore: bluestore
> > > ```
> > > db_devices still doesn't show up.
> > > Keep scratching my head...
> > >
> > >
> > > Thanks!
> > > Tony
> > >> -----Original Message-----
> > >> From: Eugen Block <eblock@xxxxxx>
> > >> Sent: Tuesday, February 2, 2021 2:20 AM
> > >> To: ceph-users@xxxxxxx
> > >> Subject:  Re: db_devices doesn't show up in exported osd
> > >> service spec
> > >>
> > >> Hi,
> > >>
> > >> I would recommend updating (again); here's my output from a 15.2.8
> > >> test cluster:
> > >>
> > >>
> > >> host1:~ # ceph orch ls --service_name osd.default --export
> > >> service_type: osd
> > >> service_id: default
> > >> service_name: osd.default
> > >> placement:
> > >>    hosts:
> > >>    - host4
> > >>    - host3
> > >>    - host1
> > >>    - host2
> > >> spec:
> > >>    block_db_size: 4G
> > >>    data_devices:
> > >>      rotational: 1
> > >>      size: '20G:'
> > >>    db_devices:
> > >>      size: '10G:'
> > >>    filter_logic: AND
> > >>    objectstore: bluestore
> > >>
> > >>
> > >> Regards,
> > >> Eugen
> > >>
> > >>
> > >> Zitat von Tony Liu <tonyliu0592@xxxxxxxxxxx>:
> > >>
> > >> > Hi,
> > >> >
> > >> > When I built the cluster with Octopus 15.2.5 initially, here is the
> > >> > OSD service spec file that was applied.
> > >> > ```
> > >> > service_type: osd
> > >> > service_id: osd-spec
> > >> > placement:
> > >> >   host_pattern: ceph-osd-[1-3]
> > >> > data_devices:
> > >> >   rotational: 1
> > >> > db_devices:
> > >> >   rotational: 0
> > >> > ```
> > >> > After applying it, all HDDs were added and the DB of each HDD was
> > >> > created on an SSD.
> > >> >
> > >> > Here is the export of OSD service spec.
> > >> > ```
> > >> > # ceph orch ls --service_name osd.osd-spec --export
> > >> > service_type: osd
> > >> > service_id: osd-spec
> > >> > service_name: osd.osd-spec
> > >> > placement:
> > >> >   host_pattern: ceph-osd-[1-3]
> > >> > spec:
> > >> >   data_devices:
> > >> >     rotational: 1
> > >> >   filter_logic: AND
> > >> >   objectstore: bluestore
> > >> > ```
> > >> > Why doesn't db_devices show up there?
> > >> >
> > >> > When I replaced a disk recently, once the new disk was installed and
> > >> > zapped, the OSD was automatically re-created, but the DB was created
> > >> > on the HDD, not the SSD. I assume this is because of the missing
> > >> > db_devices?
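> > >> >
> > >> > (Side note, and please double-check before running anything: my
> > >> > understanding is that once the spec is fixed, the affected OSD has to
> > >> > be recreated for its DB to move to the SSD, along these lines, with
> > >> > the OSD id, host and device as placeholders:
> > >> >
> > >> > ceph orch osd rm 7 --replace
> > >> > ceph orch device zap ceph-osd-1 /dev/sdX --force
> > >> >
> > >> > After the zap, the orchestrator should redeploy the OSD from the
> > >> > fixed spec.)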
> > >> >
> > >> > I tried to update the service spec with the same result: db_devices
> > >> > doesn't show up when I export it.
> > >> >
> > >> > Is this some known issue or something I am missing?
> > >> >
> > >> >
> > >> > Thanks!
> > >> > Tony
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


