Re: ceph orch osd data_allocate_fraction does not work

Looks like the orchestration-side support for this was brought into Pacific
with the rest of the drive group stuff, but the actual underlying feature
in ceph-volume (from https://github.com/ceph/ceph/pull/40659) never got a
Pacific backport. I've opened the backport now
(https://github.com/ceph/ceph/pull/53581), and since another Pacific
release is planned we can hopefully have it fixed there eventually, but
it's definitely broken as of now. Sorry about that.
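
For what it's worth, if you want to double-check what the ceph-volume in a
given image supports, grepping its help output from inside the cephadm
container should tell you (just a quick sanity check along these lines):

    # Run ceph-volume inside the cephadm container and look for the flag;
    # on current Pacific images this prints nothing, i.e. no support yet.
    cephadm shell -- ceph-volume lvm batch --help | grep -- --data-allocate-fraction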

On Thu, Sep 21, 2023 at 7:54 AM Boris Behrens <bb@xxxxxxxxx> wrote:

> I have a use case where I want to use only a small portion of the disk
> for the OSD, and the documentation states that I can use
> data_allocate_fraction [1].
>
> But cephadm cannot use this and throws this error:
> /usr/bin/podman: stderr ceph-volume lvm batch: error: unrecognized
> arguments: --data-allocate-fraction 0.1
>
> So, what I actually want to achieve is to split a single SSD into:
> 3-5x block.db for spinning disks (5x 320GB or 3x 500GB, depending on
> whether I have 8TB HDDs or 16TB HDDs)
> 1x SSD OSD (100G) for the RGW index/meta pools
> 1x SSD OSD (100G) for the RGW gc pool, because of this bug [2]
>
> My service definition looks like this:
>
> service_type: osd
> service_id: hdd-8tb
> placement:
>   host_pattern: '*'
> crush_device_class: hdd
> spec:
>   data_devices:
>     rotational: 1
>     size: ':9T'
>   db_devices:
>     rotational: 0
>     limit: 5
>     size: '1T:2T'
>   encrypted: true
>   block_db_size: 320000000000
> ---
> service_type: osd
> service_id: hdd-16tb
> placement:
>   host_pattern: '*'
> crush_device_class: hdd
> spec:
>   data_devices:
>     rotational: 1
>     size: '14T:'
>   db_devices:
>     rotational: 0
>     limit: 1
>     size: '1T:2T'
>   encrypted: true
>   block_db_size: 500000000000
> ---
> service_type: osd
> service_id: gc
> placement:
>   host_pattern: '*'
> crush_device_class: gc
> spec:
>   data_devices:
>     rotational: 0
>     size: '1T:2T'
>   encrypted: true
>   data_allocate_fraction: 0.05
> ---
> service_type: osd
> service_id: ssd
> placement:
>   host_pattern: '*'
> crush_device_class: ssd
> spec:
>   data_devices:
>     rotational: 0
>     size: '1T:2T'
>   encrypted: true
>   data_allocate_fraction: 0.05
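>
> For completeness, I apply these specs with something along these lines
> (the file name is just what I use locally):
>
>     # Preview what the orchestrator would do before actually applying
>     ceph orch apply -i osd-specs.yaml --dry-run
>     # Then apply for real
>     ceph orch apply -i osd-specs.yaml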
>
>
> [1]
>
> https://docs.ceph.com/en/pacific/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.data_allocate_fraction
> [2] https://tracker.ceph.com/issues/53585
>
> --
> This time, as an exception, the "UTF-8 Problems" self-help group will
> meet in the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



