Re: Disabling automatic provisioning of OSD's

Thank you Weiwen Hu, your idea pointed me towards looking into the
orchestrator. There I discovered there is a bunch of OSD services, and it's a
bit of a mess:

tinco@automator-1:~$ sudo ceph orch ls osd

NAME                               RUNNING  REFRESHED  AGE  PLACEMENT    IMAGE NAME                  IMAGE ID
osd.None                              51/0  4m ago     -    <unmanaged>  quay.io/ceph/ceph:v15.2.14  mix
osd.all-available-devices              0/3  -          -    <unmanaged>  <unknown>                   <unknown>
osd.dashboard-admin-1603406157941      0/2  4m ago     13M  *            quay.io/ceph/ceph:v15.2.14  <unknown>
osd.dashboard-admin-1603812430616      0/1  -          -    big-data-1   <unknown>                   <unknown>
osd.dashboard-admin-1603812754993      0/1  -          -    big-data-1   <unknown>                   <unknown>


So some of them indeed are unmanaged, and some of them are not. I wanted to
run your idea of updating the unmanaged value in the spec and running it
with --dry-run first, but the dry run feature doesn't seem to work. If I
make no changes to the spec at all, I get this output:

tinco@automator-1:~$ sudo ceph orch apply -i osd_spec_old.yaml --dry-run

[sudo] password for tinco:
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+



That doesn't look right to me; shouldn't it show all the services as they
currently are?
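For context, the kind of spec that `ceph orch ls osd --export` produces looks
roughly like this (a minimal sketch from the Octopus docs, not my actual file;
the field values are illustrative):

```yaml
service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'     # apply to every host
unmanaged: true          # the flag Weiwen suggested adding
spec:
  data_devices:
    all: true            # consume any available device
```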

I want to clean up the mess that's currently in there. How do I mark the
existing services for deletion? I couldn't find this in the orchestrator
documentation.
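Based on the orchestrator CLI docs, this is what I was considering trying
(untested on my cluster; the service names are taken from my `ceph orch ls osd`
output above, and I'm not sure how `ceph orch rm` treats OSD services on this
Octopus release):

```shell
# Sketch (untested): remove the stale service specs one by one.
# As I understand it, 'ceph orch rm' deletes the service spec but does
# not destroy OSD daemons that were already deployed by that spec.
sudo ceph orch rm osd.dashboard-admin-1603406157941
sudo ceph orch rm osd.dashboard-admin-1603812430616
sudo ceph orch rm osd.dashboard-admin-1603812754993

# Verify the remaining services afterwards:
sudo ceph orch ls osd
```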


On Sat, 20 Nov 2021 at 08:53, 胡 玮文 <huww98@xxxxxxxxxxx> wrote:

> Hi Tinco,
>
>
>
> I have not tried this myself, but I think you can:
>
>
>
> ceph orch ls osd --export > osd_spec.yaml
>
>
>
> Then edit the yaml file to add an 'unmanaged: true' line
>
>
>
> And finally,
>
>
>
> ceph orch apply -i osd_spec.yaml
>
>
>
> When in doubt, add "--dry-run". You can use "ceph orch ls osd" to check that
> "PLACEMENT" is "<unmanaged>"
>
>
>
> Weiwen Hu
>
>
>
> *From: *Tinco Andringa <tinco@xxxxxxxxxxx>
> *Sent: *21 November 2021, 0:17
> *To: *ceph-users@xxxxxxx
> *Subject: *Disabling automatic provisioning of OSD's
>
>
>
> Hi,
>
> I am trying to create a second, non-Ceph filesystem on my systems, but when
> I created the cluster I enabled the automatic provisioning of OSD's. Now
> whenever I insert a new drive into my systems it is automatically formatted
> with LVM and Bluestore and added to the pool. Normally this is very nice,
> but right now it's not what I want.
>
> I tried deleting the drive from the cluster, but then when I wipefs it, it
> immediately gets provisioned with an OSD again.
>
> So I found this command in the orchestrator cli manual:
>
> ceph orch apply osd --all-available-devices --unmanaged=true
>
> But running this command did not change the behaviour. Is there perhaps a
> service I need to reload/restart for this to take effect, or is there some
> kind of bug going on?
>
> I have a workaround where I simply stop Ceph on the machine while I
> provision the drives, but this is a bit disruptive. Is there a less
> disruptive way to stop the provisioning? Which service is responsible for
> provisioning OSD's? Perhaps I can disable just that service.
>
> Kind regards,
> Tinco
>
> p.s.: I noticed one other person has had this problem and it was left
> unresolved in this thread:
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/26QWRREQT524QQ43HDRYJKFSVV7LZ4XS/
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
>


-- 
*Tinco Andringa *- Head of Software Engineering

tinco@xxxxxxxxxxx / *+31 6 28 25 3338*

Freark Damwei 34, 8914 BM, Leeuwarden
<https://www.google.nl/maps/place/Aeroscan+BV/@53.205596,5.7949583,17z/data=!3m1!4b1!4m5!3m4!1s0x47c9a92513ec3f07:0x3b386450c6893a7a!8m2!3d53.205596!4d5.797147>

Industrieweg 67, 1115 AD, Duivendrecht
<https://www.google.nl/maps/place/Industrieweg+67,+1115+AD+Duivendrecht/@52.3362586,4.9466431,17z/data=!3m1!4b1!4m5!3m4!1s0x47c60be69f023241:0xd52214b2ee647bfa!8m2!3d52.3362586!4d4.9488318>

+31 85 130 49 93

www.aeroscan.nl
*The information sent in this e-mail message is intended exclusively for the
addressee. Use of this information by anyone other than the addressee is
prohibited. If you have received this message in error, you are requested not
to use its contents but to inform Aero Scan B.V. immediately by returning the
message and then deleting it. Disclosure, reproduction, distribution and/or
provision of the information received in this e-mail to third parties is not
permitted. *



