Re: Adding cache tier to an existing objectstore cluster possible?

My experience was that placing DB+WAL on NVMe provided a much better and
much more consistent boost to an HDD-backed pool than a cache tier. My
biggest grief with the cache tier was its unpredictable write performance:
it would cache some writes and then immediately not cache others, seemingly
at random, and we couldn't influence this behavior with any settings, well
documented or otherwise. Read cache performance was somewhat more
predictable, but nowhere near the level our enterprise NVMe drives could
provide. When I asked about this on IRC, the feedback I got was basically
"it is what it is, avoid using the cache tier".
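
For reference, putting DB+WAL on NVMe happens at OSD creation time; the
rough shape of the command is below (the device paths are only placeholders,
adjust them to your own layout):

  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2

If the DB and WAL end up on the same NVMe anyway, specifying only --block.db
is usually enough, since the WAL is then placed alongside the DB.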
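
And regarding the question in this thread: yes, the tier itself can be
attached to (and detached from) an existing pool later, roughly like this,
with placeholder pool names (the cache-tiering docs linked below describe
the full procedure):

  # attach an NVMe pool as a writeback cache in front of the base pool
  ceph osd tier add basepool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay basepool cachepool
  ceph osd pool set cachepool hit_set_type bloom

  # flush and detach it again
  ceph osd tier cache-mode cachepool proxy
  rados -p cachepool cache-flush-evict-all
  ceph osd tier remove-overlay basepool
  ceph osd tier remove basepool cachepool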
Zakhar
On Mon, Sep 20, 2021 at 9:56 AM Eugen Block <eblock@xxxxxx> wrote:

> And we are quite happy with our cache tier. When we got new HDD OSDs
> we tested whether things would improve without the tier, but we had to
> stick with it, otherwise working with our VMs was almost impossible. But
> this cache sits in front of RBD, so I can't tell how the other protocols
> perform with a cache tier.
>
>
> Zitat von Zakhar Kirpichenko <zakhar@xxxxxxxxx>:
>
> > Hi,
> >
> > You can add or remove the cache tier at any point, there's no problem with
> > that. The problem is that the cache tier doesn't work well. I tried it in
> > front of replicated and EC pools with very mixed results: when it worked,
> > there wasn't as much of a speed/latency benefit as one would expect from
> > an NVMe-based cache, and most of the time it simply didn't work, with I/O
> > very obviously hitting the underlying "cold data" pool for no reason. This
> > behavior is likely why the cache tier isn't recommended. I eventually
> > dismantled the cache tier and used the NVMe for WAL+DB.
> >
> > Best regards,
> > Zakhar
> >
> > On Mon, Sep 20, 2021 at 7:16 AM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
> >
> >> Hi,
> >>
> >> I'm running out of ideas as to why my WAL+DB NVMes are always maxed out,
> >> so I'm thinking I might have missed the cache tiering in front of my 4:2
> >> EC pool. Is it possible to add it later?
> >> There are 9 nodes, each with 6x 15.3 TB SAS SSDs and 3x NVMe drives.
> >> Currently, out of the 3 NVMes, 1 is used for the index and meta pools and
> >> the other 2 are used for WAL+DB in front of 3 SSDs each. I'm thinking of
> >> removing the WAL+DB NVMes and adding them as a writeback cache pool.
> >>
> >> The only thing that gives me a headache is the description at
> >> https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution
> >> which makes it feel like using it is not really recommended :/
> >>
> >> Any experience with it?
> >>
> >> Thank you.
> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


