Looks quite interesting. I do, however, think local caching is better done
at the block level (bcache, dm-cache, dm-writecache) rather than in
Ceph. In theory, block-level caches can work at a smaller granularity
than a Ceph object and go through the kernel block layer, which is more
optimized than a Ceph OSD.
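For reference, a bcache setup along these lines takes only a couple of
commands (a minimal sketch; the device paths are placeholders and
writeback mode is assumed):

    # create backing (slow) and cache (fast) devices in one step;
    # the cache is attached automatically
    make-bcache -B /dev/sdb -C /dev/nvme0n1
    # switch from the default writethrough to writeback mode
    echo writeback > /sys/block/bcache0/bcache/cache_mode

The resulting /dev/bcache0 could then be handed to the OSD as its data
device.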
Your results do show a favorable comparison with bcache; it would be good
to understand why this is the case, at least at a high level. I know
cache testing/simulation makes it hard to compare two caching methods
fairly, but I think it is important to know why local Ceph caching would
be better.
It would also be interesting to compare it with dm-writecache, which is
optimized for writes (using pmem or SSD devices); writes are in many
cases the main performance bottleneck, since reads can be cached in
memory (assuming you have enough RAM).
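For reference, a dm-writecache device can be assembled with dmsetup (a
minimal sketch, assuming a kernel with the writecache target; /dev/slow
and /dev/fast are placeholders, 's' selects an SSD rather than pmem
cache, and 4096 is the cache block size):

    dmsetup create osd-wc --table "0 $(blockdev --getsz /dev/slow) \
        writecache s /dev/slow /dev/fast 4096 0"

Unlike bcache, dm-writecache caches writes only and relies on the page
cache for reads, which is exactly the division of labor described above.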
So I think more tests need to be done, which for caching is not a simple
matter. I believe fio has a random_distribution=zipf:[theta] parameter
to simulate semi-realistic I/O, since purely sequential or purely random
I/O is not suitable for testing a cache.
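Something along these lines (a minimal sketch; the theta value, RBD
device and read/write mix are arbitrary choices, not recommendations):

    fio --name=zipf-hotspot --filename=/dev/rbd0 --direct=1 \
        --ioengine=libaio --rw=randrw --rwmixread=70 --bs=4k \
        --iodepth=16 --random_distribution=zipf:1.2 \
        --time_based --runtime=300

A theta of around 1.2 skews accesses toward a small hot set, which gives
a cache tier something meaningful to promote.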
/Maged
On 11/10/2019 18:04, Honggang(Joseph) Yang wrote:
Hi,
We implemented a new cache tier mode - local mode. In this mode, an
OSD is configured to manage two data devices: one fast device and one
slow device. Hot objects are promoted from the slow device to the fast
device, and demoted back to the slow device when they become cold.
A detailed introduction to tier local mode is at
https://tracker.ceph.com/issues/42286
tier local mode: https://github.com/yanghonggang/ceph/commits/wip-tier-new
This work is based on Ceph v12.2.5. I'd be glad to port it to the master
branch if needed.
Any advice and suggestions will be greatly appreciated.
thx,
Yang Honggang