We are quite happy with our cache tier. When we got new HDD OSDs we
tested whether things would improve without the tier, but we had to
stick with it; otherwise working with our VMs was almost impossible.
We only use the cache tier for RBD, though, so I can't say how the
other protocols perform with one.
Quoting Zakhar Kirpichenko <zakhar@xxxxxxxxx>:
Hi,
You can add or remove a cache tier at any time, there's no problem with
that. The problem is that cache tiering doesn't work well. I tried it in
front of both replicated and EC pools with very mixed results: when it
worked, there wasn't as much of a speed/latency benefit as one would expect
from an NVMe-based cache, and most of the time it just didn't work, with
I/O very obviously hitting the underlying "cold data" pool for no reason.
This behavior is likely why cache tiering isn't recommended. I eventually
dismantled the cache tier and used the NVMe for WAL+DB.
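For reference, tearing down a writeback tier follows the sequence from the
Ceph docs; a rough sketch (pool names here are placeholders for whatever
your tier and base pools are actually called):

```shell
# Put the cache into proxy mode so new writes pass through to the base pool
ceph osd tier cache-mode cache_pool proxy
# Flush and evict all objects still held in the cache tier
rados -p cache_pool cache-flush-evict-all
# Detach the overlay from the base pool, then remove the tier relationship
ceph osd tier remove-overlay base_pool
ceph osd tier remove base_pool cache_pool
```

Only after the evict completes and the tier is removed is it safe to delete
the cache pool and repurpose the NVMe devices.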
Best regards,
Zakhar
On Mon, Sep 20, 2021 at 7:16 AM Szabo, Istvan (Agoda) <
Istvan.Szabo@xxxxxxxxx> wrote:
Hi,
I'm running out of ideas as to why my WAL+DB NVMes are always maxed out, so
I'm thinking I might have missed setting up cache tiering in front of my
4:2 EC pool. Is it possible to add it later?
There are 9 nodes, each with 6x 15.3TB SAS SSDs and 3x NVMe drives.
Currently, of the 3 NVMes, 1 is used for the index and meta pools, and the
other 2 are used for WAL+DB in front of 3 SSDs each. I'm thinking of
removing the WAL+DB NVMes and adding them as a writeback cache pool.
The only thing that gives me a headache is the description here:
https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution
It feels like using it isn't really recommended :/
Any experience with it?
Thank you.
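To the "is it possible to add it later" part: yes, a tier can be attached
to an existing pool at any time. A sketch of the standard sequence from the
cache-tiering docs, assuming you've already created an NVMe-backed pool
(names and the sizing values below are placeholders you'd tune):

```shell
# Attach the NVMe pool as a writeback tier in front of the EC pool
ceph osd tier add ec_pool nvme_cache
ceph osd tier cache-mode nvme_cache writeback
ceph osd tier set-overlay ec_pool nvme_cache
# Hit-set settings are required, otherwise the tiering agent can't
# decide what to flush or evict
ceph osd pool set nvme_cache hit_set_type bloom
ceph osd pool set nvme_cache hit_set_count 12
ceph osd pool set nvme_cache hit_set_period 14400
# Example sizing/flush targets -- adjust to your NVMe capacity
ceph osd pool set nvme_cache target_max_bytes 1000000000000
ceph osd pool set nvme_cache cache_target_dirty_ratio 0.4
ceph osd pool set nvme_cache cache_target_full_ratio 0.8
```

Whether it would actually help your workload is a separate question, per
the word-of-caution page you linked.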
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx