Re: Ceph Bluestore tweaks for Bcache

Just for completeness for anyone who is following this thread: Igor
added that setting in Octopus, so unfortunately I am unable to use it,
as I am still on Nautilus.

Thanks,
Rich

On Wed, 6 Apr 2022 at 10:01, Richard Bade <hitrich@xxxxxxxxx> wrote:
>
> Thanks Igor for the tip. I'll see if I can use this to reduce the
> number of tweaks I need.
>
> Rich
>
> On Tue, 5 Apr 2022 at 21:26, Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:
> >
> > Hi Richard,
> >
> > just FYI: one can use the "bluestore debug enforce settings=hdd" config
> > parameter to manually enforce HDD-related settings for a BlueStore OSD.
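> >
> > For example, a minimal sketch of what that could look like in
> > ceph.conf (spaces and underscores are interchangeable in Ceph option
> > names):
> >
> >   [osd]
> >   bluestore_debug_enforce_settings = hdd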
> >
> >
> > Thanks,
> >
> > Igor
> >
> > On 4/5/2022 1:07 AM, Richard Bade wrote:
> > > Hi Everyone,
> > > I just wanted to share a discovery I made about running Bluestore on
> > > top of Bcache, in case anyone else is doing this or considering it.
> > > We ran Bcache under Filestore for a long time with good results, but
> > > recently rebuilt all the osds on Bluestore. This caused some
> > > degradation in performance that I couldn't quite put my finger on.
> > > Bluestore osds have some smarts where they detect the underlying disk
> > > type. Unfortunately, in the case of Bcache the device is detected as
> > > SSD, when in fact the HDD parameters are better suited.
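> > >
> > > As far as I can tell the detection comes down to the kernel's
> > > rotational flag, which Bcache devices report as 0. A quick way to
> > > check (device names will vary):
> > >
> > >   $ cat /sys/block/bcache0/queue/rotational
> > >   0
> > >   $ cat /sys/block/sda/queue/rotational
> > >   1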
> > > I changed the following parameters to match the HDD default values and
> > > immediately saw my average osd latency under normal workload drop
> > > from 6ms to 2ms. Peak performance didn't really change, but a test
> > > machine that I have running a constant-iops workload was much more
> > > stable, as was the average latency.
> > > Performance has returned to Filestore levels or better.
> > > Here are the parameters.
> > >
> > >   ; Make sure that we use values appropriate for HDD not SSD -
> > >   ; Bcache gets detected as SSD
> > >   bluestore_prefer_deferred_size = 32768
> > >   bluestore_compression_max_blob_size = 524288
> > >   bluestore_deferred_batch_ops = 64
> > >   bluestore_max_blob_size = 524288
> > >   bluestore_min_alloc_size = 65536
> > >   bluestore_throttle_cost_per_io = 670000
> > >
> > >   ; Try to improve responsiveness when some disks are fully utilised
> > >   osd_op_queue = wpq
> > >   osd_op_queue_cut_off = high
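> > >
> > > If you want to confirm what a running osd has actually picked up,
> > > you can query its admin socket, e.g. (osd.0 here is just an
> > > example):
> > >
> > >   ceph daemon osd.0 config get osd_op_queue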
> > >
> > > Hopefully someone else finds this useful.
> >
> > --
> > Igor Fedotov
> > Ceph Lead Developer
> >
> > Looking for help with your Ceph cluster? Contact us at https://croit.io
> >
> > croit GmbH, Freseniusstr. 31h, 81247 Munich
> > CEO: Martin Verges - VAT-ID: DE310638492
> > Com. register: Amtsgericht Munich HRB 231263
> > Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


