That is an interesting point. We are using 12 OSDs per NVMe journal on
our Filestore nodes (which seems to work ok). The workload for wal + db
is different, so that could be a factor. However, when I've looked at
the IO metrics for the NVMe it seems to be only lightly loaded, so it
does not appear to be the issue (at first sight, anyway).
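For reference, the kind of check I mean is just extended device stats
via iostat from the sysstat package (the device name nvme0n1 here is an
assumption - substitute whatever your wal/db device actually is):

```shell
# Show extended stats every 5 seconds for the NVMe wal/db device.
# Watch %util and await: a saturated device shows sustained high
# %util together with rising await.
iostat -x 5 /dev/nvme0n1
```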
Also, the particular NVMe model could be a factor (in the 6 vs 12
question) - what type are you using?
regards
Mark
On 26/06/20 8:59 pm, Zhenshi Zhou wrote:
From my point of view, it's better to have no more than 6 OSD wal/db
partitions on one NVMe. I think that may be the root cause of the slow
requests.
Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote on Friday, 26 June
2020 at 07:47:
Progress update:
- lowered debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
- will increase osd_memory_target from 4G to 16G, and observe
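For anyone following along, those two tweaks can be applied at runtime
roughly as below (a sketch only: injectargs changes are not persistent,
so mirror them in ceph.conf, and on Luminous osd_memory_target may need
an OSD restart to take full effect):

```shell
# Lower rocksdb log verbosity on all OSDs at runtime; also set
# debug_rocksdb = 1/5 in ceph.conf so it survives restarts.
ceph tell osd.* injectargs '--debug_rocksdb 1/5'

# Raise the OSD memory target from the 4G default to 16G.
# The value is in bytes: 16 * 1024^3 = 17179869184.
ceph tell osd.* injectargs '--osd_memory_target 17179869184'
```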
On 24/06/20 1:30 pm, Mark Kirkwood wrote:
> Hi,
>
> We have recently added a new storage node to our Luminous (12.2.13)
> cluster. The previous nodes are all set up as Filestore: e.g. 12 osds
> on hdd (Seagate Constellations) with one NVMe (Intel P4600) journal.
> With the new node we decided to introduce Bluestore, so it is
> configured as: (same HW) 12 osds with data on hdd and db + wal on one
> NVMe.
>
> We noticed there are periodic slow requests logged, and the implicated
> osds are the Bluestore ones 98% of the time! This suggests that we
> need to tweak our Bluestore settings in some way. Investigating, I'm
> seeing:
>
> - A great deal of rocksdb debug info in the logs - perhaps we should
> tone that down? (debug_rocksdb 4/5 -> 1/5)
>
> - We look to have the default cache settings
> (bluestore_cache_size_hdd|ssd etc); we have memory to increase these
>
> - There are some buffered io settings (bluefs_buffered_io,
> bluestore_default_buffered_write), set to (default) false. Are these
> safe (or useful) to change?
>
> - We have default rocksdb options, should some of these be changed?
> (bluestore_rocksdb_options, in particular max_background_compactions=2
> - should we have fewer, or more?)
>
> Also, anything else we should be looking at?
>
> regards
>
> Mark
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx