On 9/17/20 12:21 PM, vitalif@xxxxxxxxxx wrote:
It does: RGW really needs SSDs for bucket indexes, and CephFS also needs SSDs for metadata in any setup that's used by more than one user :). RBD, in fact, doesn't benefit much from the WAL/DB partition alone, because Bluestore never does more writes per second than the HDD can sustain on average (it flushes to the HDD every 32 writes). For RBD, the best thing is bcache.
Even just having the extra burst bandwidth available can be a big win
though, especially on HDDs with a 64k min_alloc_size, both for the WAL
and for SST reads into cache on onode misses.
Just fill your OSDs up to a decent level before comparing: a lot of objects means a lot of metadata, and once there's a lot of metadata it stops fitting in cache. The performance, and the size of the difference, will also depend on whether your HDDs have an internal SSD/media cache (a lot of them do, even if you're unaware of it).
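If you want to script that fill step, something like the sketch below works against the RGW S3 endpoint. It's only a rough illustration: the endpoint, credentials, bucket name, object size and object count are placeholders you'd adjust for your cluster, and it assumes the bucket already exists.

package main

import (
    "bytes"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Placeholder endpoint/credentials/bucket - adjust for your RGW setup.
    sess := session.Must(session.NewSession(&aws.Config{
        Endpoint:         aws.String("http://rgw.example.local:7480"),
        Region:           aws.String("default"),
        Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
        S3ForcePathStyle: aws.Bool(true),
    }))
    svc := s3.New(sess)

    // Many small objects -> many onodes and omap entries -> more metadata
    // than the OSD caches can hold, which is exactly where DB placement
    // starts to matter.
    payload := bytes.Repeat([]byte("x"), 4096) // 4 KiB per object
    for i := 0; i < 1000000; i++ {
        key := fmt.Sprintf("fill/obj-%08d", i)
        _, err := svc.PutObject(&s3.PutObjectInput{
            Bucket: aws.String("testbucket"),
            Key:    aws.String(key),
            Body:   bytes.NewReader(payload),
        })
        if err != nil {
            log.Fatalf("put %s: %v", key, err)
        }
    }
}

In practice you'd run several of these in parallel (or just let hsbench do the fill), but the point is the same: push in enough objects that the metadata no longer fits in cache before you compare.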
+1 for hsbench, but be careful and use my repo https://github.com/vitalif/hsbench because the original currently has at least 2 bugs:
1) it only reads first 64KB when benchmarking GETs
2) it reads objects sequentially instead of reading them randomly
The first one actually has a fix waiting to be merged in someone's pull request; the second fix is mine, and I can submit a PR later.
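For reference, the first bug comes down to how much of the response body the benchmark consumes: to measure GET throughput you have to drain the whole object, not just the first chunk, otherwise you're timing time-to-first-64KB. The corrected read path looks roughly like the fragment below (this is not hsbench's actual code; it reuses the svc client from the fill sketch above and additionally needs the "io" package, Go 1.16+ for io.Discard):

out, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("testbucket"),
    Key:    aws.String("fill/obj-00000000"),
})
if err != nil {
    log.Fatalf("get: %v", err)
}
// Read the body to EOF and discard it, so the measured time covers the
// whole object instead of only the beginning of the response.
n, err := io.Copy(io.Discard, out.Body)
out.Body.Close()
if err != nil {
    log.Fatalf("read body: %v", err)
}
log.Printf("read %d bytes", n)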
Yes, please submit bug fixes! I was waiting for a reply on the read
issue regarding the implementation, but the sequential vs random GETs
change should be fairly straightforward (though I would preferably make
it a new mode switch so we can keep the existing option as well).
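Something like the fragment below is probably what that mode switch would amount to; the flag name, the key layout and the fetchWholeObject helper are all made up here, not hsbench's actual options or code (it needs the flag, fmt and math/rand packages on top of the earlier imports):

// Hypothetical flag: keep sequential GETs as the default behavior and
// add a switch that enables random ordering, so both modes stay available.
randomGets := flag.Bool("random-gets", false,
    "fetch objects in random order instead of sequentially")
flag.Parse()

// Build the key list the same way in both modes...
keys := make([]string, 1000)
for i := range keys {
    keys[i] = fmt.Sprintf("fill/obj-%08d", i)
}

// ...and only change the visiting order when the switch is set.
if *randomGets {
    rand.Shuffle(len(keys), func(i, j int) { keys[i], keys[j] = keys[j], keys[i] })
}
for _, key := range keys {
    fetchWholeObject(key) // placeholder for the GetObject + io.Copy pattern above
}

That keeps the current sequential behavior as the default and makes randomization purely a question of key ordering, so the timing code itself doesn't have to change.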
Mark
Yes, I agree that there are many knobs for fine-tuning Ceph performance.
The problem is that we don't have data on which workloads benefit most
from the WAL/DB on SSD versus on the same spinning drive, and by how
much. Does it really help in a cluster that is mostly used for object
storage/RGW? Or is it mainly block storage/RBD workloads that benefit?

IMHO we need some cost-benefit analysis here, because the cost of
placing the WAL/DB on an SSD is quite noticeable: multiple OSDs fail
when the shared SSD fails (for example, one SSD hosting the DB/WAL of
five HDD OSDs takes all five down with it), and usable capacity is
reduced.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx