Re: cephadm db_slots and wal_slots ignored

If you have already applied your changes to the OSD layout ('ceph orch apply -i specs.yml'), you can just zap the devices and Ceph will redeploy the OSDs with the desired layout.
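For example, a rough sketch of that flow (the host name, device path and OSD id below are placeholders, adjust them to your cluster):

  ceph orch apply -i osd_spec.yml                  # (re)apply the corrected spec
  ceph orch osd rm 12                              # drain and remove one of the mis-deployed OSDs
  ceph orch device zap osd-node1 /dev/sdc --force  # wipe the device so cephadm redeploys it per the spec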


Quoting "Schweiss, Chip" <chip@xxxxxxxxxxxxx>:

That worked!  Thanks!

Now to figure out how to correct all the incorrect OSDs.



On Thu, Jan 21, 2021 at 1:29 AM Eugen Block <eblock@xxxxxx> wrote:

If you use block_db_size and limit in your yaml file, e.g.

block_db_size: 64G  (or whatever you choose)
limit: 6

this should not consume the entire disk but only as much as you
configured. Could you check whether that works for you?
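
For reference, a minimal sketch of how the spec from your mail below could look with block_db_size in place of db_slots/wal_slots. The sizes are example values only; block_wal_size is my assumption for capping the WAL partitions the same way, I have not verified it on 15.2.8:

service_type: osd
service_id: three_tier_osd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  model: 'ST14000NM0288'
db_devices:
  rotational: 0
  model: 'INTEL SSDPE2KX020T8'
  limit: 6
wal_devices:
  model: 'INTEL SSDPEL1K200GA'
  limit: 12
block_db_size: 64G   # example value, size it for the planned 24-disk layout
block_wal_size: 10G  # assumption: caps the WAL partition size analogously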


Quoting "Schweiss, Chip" <chip@xxxxxxxxxxxxx>:

> I'm trying to set up a new ceph cluster with cephadm on a SUSE SES trial
> that has Ceph 15.2.8.
>
> Each OSD node has 18 rotational SAS disks, 4 NVMe 2TB SSDs for DB, and 2
> NVMe 200GB Optane SSDs for WAL.
>
> These servers will eventually have 24 rotational SAS disks that they will
> inherit from existing storage servers.  So I don't want all the space used
> on the DB and WAL SSDs.
>
> I suspect from the comment "(db_slots is actually to be favoured here, but
> it's not implemented yet)" on this page in the docs:
> https://docs.ceph.com/en/latest/cephadm/drivegroups/#the-advanced-case
> that these parameters are not yet implemented, even though they are
> documented under "ADDITIONAL OPTIONS".
>
> My osd_spec.yml:
> service_type: osd
> service_id: three_tier_osd
> placement:
>   host_pattern: '*'
> data_devices:
>   rotational: 1
>   model: 'ST14000NM0288'
> db_devices:
>   rotational: 0
>   model: 'INTEL SSDPE2KX020T8'
>   limit: 6
> wal_devices:
>   model: 'INTEL SSDPEL1K200GA'
>   limit: 12
> db_slots: 6
> wal_slots: 12
>
> All available space is consumed on my DB and WAL SSDs with only 18 OSDs,
> leaving no room to add additional spindles.
>
> Is this still a work in progress, or a bug I should report?  Possibly
> related to https://github.com/rook/rook/issues/5026.  At a minimum, this
> appears to be a documentation bug.
>
> How can I work around this?
>
> -Chip





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


