Hi,
understandable, but we have played with the PGs of our rbd cache pool a
couple of times, the last time about a year ago (although it only holds
roughly 200 GB of data). We haven't noticed any issues. To be fair,
though, we only use this one cache tier, and there's probably a reason
why it's not recommended.
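
For reference, the manual change is just a pg_num bump on the pool. A
minimal sketch, assuming a cache pool named rbd_cache (the pool name and
the exact force flag should be checked against your own setup):

------
# bump the PG count; on cache pools Ceph warns and wants the change forced
ceph osd pool set rbd_cache pg_num 256 --yes-i-really-mean-it
# on recent releases pgp_num follows automatically, otherwise set it too
ceph osd pool set rbd_cache pgp_num 256
------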
Quoting Daniel Persson <mailto.woden@xxxxxxxxx>:
Hi Eugen.
I've tried. The system says it's not recommended, but that I may force it.
Forcing something at the risk of losing data is not something I'm going
to do.
Best regards
Daniel
On Sat, Mar 26, 2022 at 8:55 PM Eugen Block <eblock@xxxxxx> wrote:
Hi,
just because the autoscaler doesn’t increase the pg_num doesn’t mean
you can’t increase it manually. Have you tried that?
Quoting Daniel Persson <mailto.woden@xxxxxxxxx>:
> Hi Team.
>
> We are currently in the process of resizing our cache pool. It is
> currently set to 32 PGs and distributed unevenly across our OSDs. The
> system has automatically tried to scale it up to 256 PGs without
> succeeding, and I read that cache pools are not autoscaled, so we are
> handling the scaling ourselves. Our plan is to remove the old pool and
> create a new one with more PGs.
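
For reference, creating and attaching a replacement tier would look
roughly like the sketch below. The pool names and the base pool
cephfs_data are assumptions based on the names used in this thread:

------
# create the replacement cache pool with more PGs
ceph osd pool create cephfs_data_cache_new 256 256
# attach it as a tier of the base pool and route client IO through it
ceph osd tier add cephfs_data cephfs_data_cache_new
ceph osd tier cache-mode cephfs_data_cache_new writeback
ceph osd tier set-overlay cephfs_data cephfs_data_cache_new
------

The hit_set_*, target_max_* and min age settings of the existing cache
pool would have to be carried over to the new one as well.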
>
> I've run the pool in readproxy mode for a week now, so most of the
> objects should be available in cold storage, but I want to be totally
> sure that we don't lose any data.
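
A quick way to check what is still only in the cache tier, as a sketch
(pool name as above):

------
# cache tier pools report a DIRTY object count in 'ceph df detail'
ceph df detail
# number of objects currently held in the cache pool
rados -p cephfs_data_cache ls | wc -l
------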
>
> I read in the documentation that you could remove the overlay and that
> would redirect clients to cold storage.
>
> Is the preferred strategy to remove the overlay, then run
> cache-flush-evict-all to clear the cache, and then replace it? Or
> should I be fine just removing the overlay and the tiering and
> replacing it with a new pool?
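
For completeness, a sketch of the teardown sequence, assuming the base
pool is cephfs_data; this mirrors the cache-tiering documentation but
should be double-checked against your release:

------
# keep serving reads from cold storage while the cache drains
ceph osd tier cache-mode cephfs_data_cache readproxy
# flush remaining dirty objects and evict everything from the cache
rados -p cephfs_data_cache cache-flush-evict-all
# detach the (now empty) cache tier from the base pool
ceph osd tier remove-overlay cephfs_data
ceph osd tier remove cephfs_data cephfs_data_cache
------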
>
> Currently we have configured it with a write cache age of 0.5 hours
> and a read cache age of 2 days:
>
> ------
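> # 1800 s = 30 minutes before a dirty object may be flushed,
> # 172800 s = 2 days before an object may be evicted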
> ceph osd pool set cephfs_data_cache cache_min_flush_age 1800
> ceph osd pool set cephfs_data_cache cache_min_evict_age 172800
> ------
>
> The cache is still 25 TB in size, and it would be sad to lose data if
> some of it has not yet been flushed to cold storage.
>
> Best regards
> Daniel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx