You can also just remove the cache tier from the pool, increase the PGs, then set it back up as a cache pool. It'll require downtime if it's in front of an EC RBD pool or EC CephFS on Jewel or Hammer, but it won't take long, since the cache pool will be empty once it's been flushed - there are no objects left to move around during the split.
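Something like the following should work. This is an untested sketch - I'm assuming your base pool is named cephfs_data, the tier is in writeback mode, and you're going to 256 PGs, so adjust names and numbers to match your setup (on newer releases the forward step wants --yes-i-really-mean-it as well).

Flush and detach the tier (this is the part that needs the downtime window):

# ceph osd tier cache-mode cephfs_data_cache forward
# rados -p cephfs_data_cache cache-flush-evict-all
# ceph osd tier remove-overlay cephfs_data
# ceph osd tier remove cephfs_data cephfs_data_cache

Split the now-empty pool - the cache-pool restriction no longer applies:

# ceph osd pool set cephfs_data_cache pg_num 256
# ceph osd pool set cephfs_data_cache pgp_num 256

Re-attach it:

# ceph osd tier add cephfs_data cephfs_data_cache
# ceph osd tier cache-mode cephfs_data_cache writeback
# ceph osd tier set-overlay cephfs_data cephfs_data_cache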
Why do you need to increase the PG count of your cache pool?
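If you really do need to split it in place instead, the error message itself points at the escape hatch - again untested, and mind the free-space warning it gives:

# ceph osd pool set cephfs_data_cache pg_num 256 --yes-i-really-mean-it
# ceph osd pool set cephfs_data_cache pgp_num 256

Then kick off scrubs on the pool's PGs afterwards (ceph pg scrub <pgid>), as the message asks.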
On Sat, May 27, 2017, 1:30 PM Michael Shuey <shuey@xxxxxxxxxxx> wrote:
I don't recall finding a definitive answer - though it was some time ago. IIRC, it did work but made the pool fragile; I remember having to rebuild the pools for my test rig soon after. Don't quite recall the root cause, though - could have been newbie operator error on my part. May have also had something to do with my cache pool settings; at the time I was doing heavy benchmarking with a limited-size pool, so it's possible I filled the cache pool with data while the pg_num change was going on, causing subtle breakage (despite being explicitly warned to NOT do that).
--
Mike Shuey

On Sat, May 27, 2017 at 8:52 AM, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

# ceph osd pool set cephfs_data_cache pg_num 256
Error EPERM: splits in cache pools must be followed by scrubs and
leave sufficient free space to avoid overfilling. use
--yes-i-really-mean-it to force.
Is there something I need to do before increasing PGs on a cache
pool? Can this be (safely) done live?
Hello.
Did you ever find an answer to this question? I can't find anything about this warning via Google.
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com