Re: Ceph Bluestore tweaks for Bcache

Hi Frank, yes, I changed the device class to HDD, but there seem to be
some smarts in the background that apply the different settings based
not on the class but on some other internal mechanism.
However, I applied the class after creating the OSD rather than during
creation. If someone knows how to specify this manually, I'd also be
interested to know.
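
From ceph-volume's help it looks like there is a flag for setting the
class at build time. A rough sketch of both options (untested on my
side; osd.12 and /dev/bcache0 are just example names):

  # Set the class when the OSD is built:
  ceph-volume lvm create --data /dev/bcache0 --crush-device-class hdd

  # Or change it after the fact (this is what I did):
  ceph osd crush rm-device-class osd.12
  ceph osd crush set-device-class hdd osd.12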

I probably should also have said that I am using Nautilus and it may be
different in newer versions.
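
For anyone applying the settings below, on Nautilus the config database
works as well as ceph.conf. One caveat: as far as I know
bluestore_min_alloc_size is baked in when the OSD is created, so it has
to be in place before the OSD is (re)built. Roughly (device name is
just an example):

  # Bluestore keys off the kernel's rotational flag, and bcache
  # devices report as non-rotational, hence the SSD detection:
  cat /sys/block/bcache0/queue/rotational   # prints 0

  # Push the HDD-appropriate values to all OSDs via the config
  # database (same values as listed further down):
  ceph config set osd bluestore_prefer_deferred_size 32768
  ceph config set osd bluestore_min_alloc_size 65536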

Rich


On Tue, 5 Apr 2022, 20:39 Frank Schilder, <frans@xxxxxx> wrote:

> Hi Richard,
>
> I'm planning to use dm_cache with bluestore OSDs on LVM. I was also
> wondering how the device will be detected. I guess if I build the OSD
> before assigning dm_cache space it will use the usual HDD defaults. Did you
> try forcing the OSD to be in class HDD on build? I believe the OSD create
> commands have a flag for that.
>
> If any of the OSD gurus looks at this, could you possibly point to a
> reference about what parameters might need attention in such scenarios and
> what the preferred deployment method would be?
>
> Thanks and best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Richard Bade <hitrich@xxxxxxxxx>
> Sent: 05 April 2022 00:07:34
> To: Ceph Users
> Subject:  Ceph Bluestore tweaks for Bcache
>
> Hi Everyone,
> I just wanted to share a discovery I made about running bluestore on
> top of Bcache in case anyone else is doing this or considering it.
> We've run Bcache under Filestore for a long time with good results but
> recently rebuilt all the osds on bluestore. This caused some
> degradation in performance that I couldn't quite put my finger on.
> Bluestore osds have some smarts where they detect the disk type.
> Unfortunately in the case of Bcache it detects as SSD, when in fact
> the HDD parameters are better suited.
> I changed the following parameters to match the HDD default values and
> immediately saw my average osd latency during normal workload drop
> from 6ms to 2ms. Peak performance didn't really change, but a test
> machine that I have running a constant IOPS workload was much more
> stable, as was the average latency.
> Performance has returned to Filestore levels or better.
> Here are the parameters.
>
>  ; Make sure that we use values appropriate for HDD not SSD -
>  ; Bcache gets detected as SSD
>  bluestore_prefer_deferred_size = 32768
>  bluestore_compression_max_blob_size = 524288
>  bluestore_deferred_batch_ops = 64
>  bluestore_max_blob_size = 524288
>  bluestore_min_alloc_size = 65536
>  bluestore_throttle_cost_per_io = 670000
>
>  ; Try to improve responsiveness when some disks are fully utilised
>  osd_op_queue = wpq
>  osd_op_queue_cut_off = high
>
> Hopefully someone else finds this useful.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



