Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

On Wed, Jul 19, 2023 at 3:58 PM Engelmann Florian
<florian.engelmann@xxxxxxxxxxxx> wrote:
>
> Hi Ilya,
>
> thank you for your fast response! I already knew those mkfs parameters, but the possibility to exclude discards from RBD QoS was new to me. It looks like this option is not available in Pacific, only in Quincy, so we will have to upgrade our clusters first.

Upgrading the cluster itself (monitors, OSDs, etc.) isn't necessary;
you can upgrade just the client nodes.

>
> Is it possible to exclude discard by default for ALL RBD images (or for all images in a pool), or is it a per-image setting? If it is a per-image setting, we will have to extend Cinder (OpenStack) to support it.

Like most RBD options, rbd_qos_exclude_ops can be a per-image, per-pool
or cluster-wide setting.  See the "rbd config image/pool/global ..."
groups of commands.
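
For example, a quick sketch with a placeholder pool name ("volumes");
the exact value syntax is worth double-checking against the
rbd_qos_exclude_ops documentation [1] and "rbd help config pool set"
on your version:

    # per-pool override
    rbd config pool set volumes rbd_qos_exclude_ops discard

    # cluster-wide default
    rbd config global set global rbd_qos_exclude_ops discard

    # list the configuration as seen for the pool
    rbd config pool list volumes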

Thanks,

                Ilya

>
> All the best,
> Florian
>
> ________________________________________
> From: Ilya Dryomov <idryomov@xxxxxxxxx>
> Sent: Wednesday, July 19, 2023 3:16:20 PM
> To: Engelmann Florian
> Cc: ceph-users@xxxxxxx
> Subject: Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance
>
> On Wed, Jul 19, 2023 at 11:01 AM Engelmann Florian
> <florian.engelmann@xxxxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > I noticed an incredibly high performance drop with mkfs.ext4 (as well as mkfs.xfs) when setting (almost) "any" value for rbd_qos_write_bps_limit (or rbd_qos_bps_limit).
> >
> > Baseline: 4 TB RBD volume, rbd_qos_write_bps_limit = 0
> > mkfs.ext4:
> > real    0m6.688s
> > user    0m0.000s
> > sys     0m0.006s
> >
> > 50 GB/s: 4 TB RBD volume, rbd_qos_write_bps_limit = 53687091200
> > mkfs.ext4:
> > real    1m22.217s
> > user    0m0.009s
> > sys     0m0.000s
> >
> > 5 GB/s: 4 TB RBD volume, rbd_qos_write_bps_limit = 5368709120
> > mkfs.ext4:
> > real    13m39.770s
> > user    0m0.008s
> > sys     0m0.034s
> >
> > 500 MB/s: 4 TB RBD volume, rbd_qos_write_bps_limit = 524288000
> > mkfs.ext4:
> > test still running... I can provide the result if needed.
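> >
> > (One way to set such a per-image limit, with placeholder pool/image
> > names; the value is in bytes per second:)
> >
> >     rbd config image set volumes/testvol rbd_qos_write_bps_limit 53687091200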
> >
> > The tests are running on a client VM (Ubuntu 22.04) using QEMU/libvirt.
> >
> > Using the same values with Qemu/libvirt QoS does not affect mkfs performance.
> > https://libvirt.org/formatdomain.html#block-i-o-tuning
> >
> > Ceph Version: 16.2.11
> > Qemu: 6.2.0
> > Libvirt: 8.0.0
> > Kernel (hypervisor host): 5.19.0-35-generic
> > librbd1 (hypervisor host): 17.2.5
> >
> > Could anyone please confirm and explain what's going on?
>
> Hi Florian,
>
> RBD QoS write limits apply to all write-like operations, including
> discards.  By default, both mkfs.ext4 and mkfs.xfs attempt to discard
> the entire partition/device, and the librbd QoS machinery treats that
> as 4 TB worth of writes.
>
> RBD images are thin-provisioned, so if you are creating a filesystem on
> a freshly created image, you can skip discarding with "-E nodiscard" for
> mkfs.ext4 or "-K" for mkfs.xfs.
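>
> For example (the device path is just a placeholder):
>
>     mkfs.ext4 -E nodiscard /dev/vdb
>     mkfs.xfs -K /dev/vdb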
>
> Alternatively, you can waive QoS limits for discards (or even an
> arbitrary combination of operations) by setting the
> rbd_qos_exclude_ops option [1] appropriately.
>
> [1] https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#confval-rbd_qos_exclude_ops
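>
> For instance, on a single image it could look roughly like this (the
> image spec is just an example; see "rbd help config image set"):
>
>     rbd config image set mypool/myimage rbd_qos_exclude_ops discard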
>
> Thanks,
>
>                 Ilya



