Re: Changing pg_num on cache pool

If you aren't increasing target_max_bytes and cache_target_full_ratio, I wouldn't bother increasing the PG count on the cache pool. It won't gain any capacity at all, because its size is dictated by those settings rather than by the total size of the cluster, and it will remain just as redundant as it is now.
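
If the goal is more usable cache capacity rather than more PGs, the knobs are the tiering targets themselves. A minimal sketch (the ~200 GB / 80000 figures are hypothetical and have to fit whatever disks back the cache tier):

ceph osd pool get ec_cache target_max_bytes
ceph osd pool get ec_cache target_max_objects
ceph osd pool set ec_cache target_max_bytes 214800000000   # hypothetical ~200 GB
ceph osd pool set ec_cache target_max_objects 80000        # hypothetical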

If you are set on increasing it, which I don't see a need for, then I would recommend removing it as a cache tier, increasing the PG count, and then adding it back in (roughly as sketched below). You will need to stop all clients accessing the EC RBD pool for the duration of this process.
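
A sketch of that sequence, using the pool names from the output below; pg_num 64 is only an example target, and some releases require --yes-i-really-mean-it to set the forward cache mode:

# stop all clients of ec_rbd first
ceph osd tier cache-mode ec_cache forward --yes-i-really-mean-it
rados -p ec_cache cache-flush-evict-all    # drain the cache tier
ceph osd tier remove-overlay ec_rbd
ceph osd tier remove ec_rbd ec_cache

ceph osd pool set ec_cache pg_num 64       # example value; pgp_num must follow
ceph osd pool set ec_cache pgp_num 64

ceph osd tier add ec_rbd ec_cache
ceph osd tier cache-mode ec_cache writeback
ceph osd tier set-overlay ec_rbd ec_cache

The target_max_bytes and ratio settings are pool attributes, so they should survive the detach and re-attach, but it is worth re-checking them with ceph osd pool get afterwards.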


On Sun, May 28, 2017, 10:42 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
On 05/28/2017 09:43 PM, David Turner wrote:

> What are your pg numbers for each pool? Your % used in each pool? And
> number of OSDs?
>
GLOBAL:
     SIZE       AVAIL      RAW USED     %RAW USED
     89380G     74755G       14625G         16.36

POOLS:
     NAME               ID     USED       %USED     MAX AVAIL     OBJECTS
     replicated_rbd     1       3305G     12.10        24007G      850006
     ec_rbd             2       2674G      5.83        43212G      686555
     ec_cache           3      82281M      0.33        24007G       20765


pool 1 'replicated_rbd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 218 flags hashpspool stripe_width 0
         removed_snaps [1~3,5~2,8~2,e~2]
pool 2 'ec_rbd' erasure size 5 min_size 4 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 188 lfor 107 flags hashpspool tiers 3 read_tier 3 write_tier 3 stripe_width 4128
         removed_snaps [1~5]
pool 3 'ec_cache' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 16 pgp_num 16 last_change 1117 flags hashpspool,incomplete_clones tier_of 2 cache_mode writeback target_bytes 107400000000 target_objects 40000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x1 decay_rate 0 search_last_n 0 stripe_width 0
         removed_snaps [1~5]


ec_cache settings:
ceph osd pool set ec_cache target_max_bytes 107400000000 # ~100 GB
ceph osd pool set ec_cache cache_target_dirty_ratio 0.3
ceph osd pool set ec_cache cache_target_dirty_high_ratio 0.6
ceph osd pool set ec_cache cache_target_full_ratio 0.8
ceph osd pool set ec_cache target_max_objects 40000

Number of OSDs: 6 OSD nodes * 4 OSDs each = 24 OSDs.
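
For a rough sense of per-OSD PG load from the numbers above (back-of-the-envelope, assuming all three pools map across the same 24 OSDs):

(256 * 3 + 256 * 5 + 16 * 3) / 24 = 2096 / 24 ≈ 87 PG replicas per OSD

so the 16-PG cache pool accounts for only a small fraction of that either way.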
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
