On 12/03/2021 17:28, Philip Brown wrote:
"First it is not a good idea to mix SSD/HDD OSDs in the same pool,"
Sorry for not being explicit.
I used the cephadm/ceph orch facilities and told them "go set up all my disks".
SO they automatically set up the SSDs to be WAL devices or whatever.
I think the situation is basically the same: your test generates too
much queue depth for random writes on the HDDs, and they probably
saturate, causing high sporadic tail latency. Try mapping the rbd image
with a queue depth limit, as stated earlier, to cap this. But again, if
this workload is what you expect in production, consider using SSDs.
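
For example, with the kernel rbd client you can set this at map time
(the pool/image names below are just placeholders):

    # map the image with a reduced per-device queue depth
    # (the krbd default is 128)
    rbd map mypool/myimage -o queue_depth=16

A lower queue depth trades some peak throughput for more predictable
latency on the HDDs.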
As a side issue, I do not know how cephadm would configure the 2 x 100
GB SSDs for wal/db serving the 8 HDDs. With each 100 GB SSD split
across 4 HDDs, each OSD gets only about a 25 GB db partition; you need
over 30 GB per partition (the next RocksDB level threshold), else the
db would end up mostly on the slow HDDs.
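
One way to check what cephadm actually did, assuming shell access to an
OSD host:

    # list the LVs ceph-volume created, including any [db] devices
    # and their sizes
    ceph-volume lvm list
    # db spillover onto the slow HDD shows up as a BLUEFS_SPILLOVER
    # health warning
    ceph health detail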
/maged