Bluestore performance tuning for HDD with NVMe DB+WAL

Hi,

We have recently added a new storage node to our Luminous (12.2.13) cluster. The previous nodes are all set up as Filestore: 12 OSDs on HDD (Seagate Constellations) with one NVMe (Intel P4600) journal. On the new node we decided to introduce Bluestore, so it is configured (same hardware) as 12 OSDs with data on HDD and DB + WAL on one NVMe.

We have noticed periodic slow requests in the logs, and the implicated OSDs are the Bluestore ones 98% of the time! This suggests we need to tweak our Bluestore settings in some way. Investigating, I'm seeing:

- A great deal of rocksdb debug info in the logs - perhaps we should tone that down? (debug_rocksdb 4/5 -> 1/5)

- We appear to have the default cache settings (bluestore_cache_size_hdd|ssd etc.); we have memory available to increase these

- There are some buffered io settings (bluefs_buffered_io, bluestore_default_buffered_write), set to (default) false. Are these safe (or useful) to change?

- We have the default rocksdb options; should some of these be changed? (bluestore_rocksdb_options, in particular max_background_compactions=2 - should we have fewer, or more?)
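For concreteness, a sketch of how a couple of the tweaks above might look in ceph.conf (the cache value is an illustrative starting point assuming spare RAM, not something tested on this cluster):

```
[osd]
# quiet the rocksdb debug chatter
debug_rocksdb = 1/5
# Luminous default for HDD OSDs is 1 GiB; value is in bytes,
# so this example raises it to 3 GiB per OSD - only do this if
# the node genuinely has the memory to spare (12 OSDs x 3 GiB+).
bluestore_cache_size_hdd = 3221225472
```

The debug level can also be changed at runtime with `ceph tell osd.* injectargs '--debug_rocksdb 1/5'`; some bluestore_* options, on the other hand, only take effect at OSD start, so a restart may be needed for those.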

Also, anything else we should be looking at?
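In case it's useful to anyone reproducing this, here is a rough Python sketch of how we tallied which OSDs are implicated - it just counts "slow request" cluster-log lines per OSD, assuming the OSD id appears as `osd.N` on the line as in typical ceph.log [WRN] messages (the sample lines below are made up for illustration):

```python
import re
from collections import Counter

def count_slow_requests(log_lines):
    """Count 'slow request' cluster-log lines per OSD.

    Assumes the implicated OSD appears as 'osd.N' somewhere on the
    line, as in typical ceph.log [WRN] slow request messages.
    """
    counts = Counter()
    osd_re = re.compile(r"\bosd\.(\d+)\b")
    for line in log_lines:
        if "slow request" not in line:
            continue
        m = osd_re.search(line)
        if m:
            counts["osd." + m.group(1)] += 1
    return counts

# Illustrative sample lines, not real output from our cluster:
sample = [
    "2020-07-01 10:00:01 osd.14 [WRN] slow request 30.21 seconds old",
    "2020-07-01 10:00:05 osd.14 [WRN] slow request 31.02 seconds old",
    "2020-07-01 10:00:09 osd.3 [WRN] slow request 30.55 seconds old",
    "2020-07-01 10:01:00 mon.a [INF] overall HEALTH_OK",
]
print(count_slow_requests(sample).most_common())
# -> [('osd.14', 2), ('osd.3', 1)]
```

Cross-referencing the top offenders against which OSDs are Bluestore is how we arrived at the 98% figure.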

regards

Mark

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
