Hi,

You can add or remove the cache tier at any time; there's no problem with that. The problem is that the cache tier doesn't work well. I tried it in front of both replicated and EC pools with very mixed results: when it did work, there wasn't as much of a speed/latency benefit as one would expect from an NVMe-based cache, and most of the time it simply didn't work, with I/O very obviously hitting the underlying "cold data" pool for no reason. This behavior is likely why the cache tier isn't recommended. I eventually dismantled the cache tier and used the NVMe drives for WAL+DB.

Best regards,
Zakhar

On Mon, Sep 20, 2021 at 7:16 AM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:

> Hi,
>
> I'm running out of ideas as to why my WAL+DB NVMes are always maxed out, so
> I'm wondering whether I missed out by not putting cache tiering in front of
> my 4:2 EC pool. Is it possible to add it later?
> There are 9 nodes with 6x 15.3TB SAS SSDs and 3x NVMe drives each. Currently,
> out of the 3 NVMes, 1 is used for the index and meta pools, and the other 2
> are used for WAL+DB in front of 3 SSDs each. I'm thinking of removing the
> WAL+DB NVMes and adding them as a writeback cache pool.
>
> The only thing that gives me a headache is this description:
> https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution
> It feels like using it is not really recommended :/
>
> Any experience with it?
>
> Thank you.
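
For anyone who wants to try this anyway, here is a minimal sketch of how a writeback cache tier can be attached to an existing EC pool and later dismantled, following the commands in the cache-tiering documentation linked above. The pool names (ecpool, cachepool) and the sizing values are placeholders, not taken from this thread; adjust them to your own setup.

    # Attach an existing fast pool as a writeback cache tier in front of the EC pool
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool

    # Basic hit-set and eviction settings the tier needs before it behaves sensibly
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool target_max_bytes 1099511627776   # ~1 TiB; size to the cache pool
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4
    ceph osd pool set cachepool cache_target_full_ratio 0.8

    # To dismantle it later: switch the mode so new writes go to the base pool
    # (some releases may ask for --yes-i-really-mean-it here), flush and evict
    # everything, then detach the tier
    ceph osd tier cache-mode cachepool proxy
    rados -p cachepool cache-flush-evict-all
    ceph osd tier remove-overlay ecpool
    ceph osd tier remove ecpool cachepool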