Hi Eugen,
Thanks for the great feedback. Is there anything specific about the
cache tier itself that you like vs hypothetically having caching live
below the OSDs? There are some real advantages to the cache tier
concept, but eviction over the network has definitely been one of the
tougher aspects of how it works (imho) compared with block-level caching.
Mark
On 2/16/22 10:18, Eugen Block wrote:
Hi,
we've noticed the warnings for quite some time now, but we're big fans
of the cache tier. :-)
IIRC we set it up some time around 2015 or 2016 for our production
OpenStack environment and it works nicely for us. We tried running
without the cache some time after we switched to Nautilus, but the
performance was really bad, so we enabled it again. Of course, one could
argue that we could just use SSD OSDs for the cached pool itself. But
since the cache works fine we don't find it necessary to rebuild the
entire pool with larger SSDs.
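For anyone following along, the writeback cache tier we're talking about
is set up roughly like this (pool names, PG counts and thresholds below
are just placeholders, not our actual values):

  # create the cache pool on fast devices (crush rule / size are site-specific)
  ceph osd pool create cache-pool 128 128 replicated
  # attach it in front of the existing base pool and enable writeback mode
  ceph osd tier add base-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay base-pool cache-pool
  # hit-set tracking and flush/evict thresholds (values are examples only)
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool target_max_bytes 1000000000000
  ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
  ceph osd pool set cache-pool cache_target_full_ratio 0.8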
We're currently still on Nautilus; we want to upgrade to Octopus soon.
But I think we would vote for keeping the cache tier. :-)
Regards,
Eugen
Quoting Neha Ojha <nojha@xxxxxxxxxx>:
Hi everyone,
We'd like to understand how many users are using cache tiering and in
which release.
The cache tiering code is not actively maintained, and there are known
performance issues with using it (documented in
https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution).
We are wondering if we can deprecate cache tiering sometime soon.
Thanks,
Neha
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx