Re: Cache pools at or near target size but no eviction happens

Hi Eugen,

I think I found the root cause. The actual object count is below the 500k
that I configured, but some ghost objects reported by "ceph df" are
triggering the alarm.
I will keep monitoring it and post the results here.

$ rados -p cached-hdd-cache ls | wc -l
444654
$ ceph df | grep -e "POOL\|cached-hdd"
POOLS:
    POOL                 ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cached-hdd           24     1.4 TiB       1.52M     4.3 TiB      1.79        78 TiB
    cached-hdd-cache     25     399 GiB     890.80k     798 GiB     14.21       2.4 TiB
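
For cross-checking the gap between the 890.80k objects "ceph df" reports and the
444654 that "rados ls" sees, the DIRTY column of "ceph df detail" and the cache-tier
flush commands are the usual tools (a sketch, using the pool name above; note that
cache-flush-evict-all drains the whole tier and can take a while on a busy cluster):

$ ceph df detail | grep -e "POOL\|cached-hdd"          # DIRTY column = objects not yet flushed to the base tier
$ rados -p cached-hdd-cache cache-try-flush-evict-all  # flush/evict only objects that are not currently in use
$ rados -p cached-hdd-cache cache-flush-evict-all      # flush/evict everything the agent can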

This is similar to a previous case I reported:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/JUSZC6UNOTTALA3JFZ24OODBPYJHWELT/

Regs,
Icy



On Fri, 29 May 2020 at 20:24, Eugen Block <eblock@xxxxxx> wrote:

> Can you manually flush/evict the cache? Maybe reduce max_target_bytes
> and max_target_objects to see if that triggers anything. We use
> cache_mode writeback; maybe give that a try?
> I don't see many differences between your cache tier config and ours,
> except for cache_mode, and we don't have a max_target_objects limit
> set, only max_target_bytes. Is it possible that none of the objects
> are getting marked dirty?
>
> Do you see "dirty" entries in ceph df detail?
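
For reference, those suggestions map roughly to the following commands (a sketch
using the pool names from this thread; the lower targets are arbitrary example
values, not something tested on this cluster):

$ ceph osd pool set cached-hdd-cache target_max_objects 400000      # try a lower object target
$ ceph osd pool set cached-hdd-cache target_max_bytes 549755813888  # 512 GiB, an arbitrary lower byte target
$ ceph osd tier cache-mode cached-hdd-cache writeback               # switch the tier from readproxy to writeback
$ ceph df detail | grep -e "POOL\|cached-hdd"                       # the DIRTY column shows objects not yet flushed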
>
>
> Quoting icy chan <icy.kf.chan@xxxxxxxxx>:
>
> > Hi Eugen,
> >
> > Sorry for the missing information. "cached-hdd-cache" is the overlay tier
> > of "cached-hdd" and configured in "readproxy" mode.
> >
> > $ ceph osd dump | grep cached-hdd
> > pool 24 'cached-hdd' replicated size 3 min_size 2 crush_rule 1 object_hash
> > rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 27242 lfor
> > 22992/22992/22992 flags hashpspool,selfmanaged_snaps tiers 25 read_tier 25
> > write_tier 25 stripe_width 0 application rbd
> > pool 25 'cached-hdd-cache' replicated size 3 min_size 1 crush_rule 3
> > object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change
> > 30376 lfor 22992/30306/30304 flags
> > hashpspool,incomplete_clones,selfmanaged_snaps tier_of 24 cache_mode
> > readproxy target_bytes 1099511627776 target_objects 500000 hit_set
> > bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1200s x4
> > decay_rate 20 search_last_n 1 min_read_recency_for_promote 1
> > min_write_recency_for_promote 1 stripe_width 0
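
(As a quick sanity check on that dump: target_bytes 1099511627776 is 1024^4 bytes,
i.e. exactly 1 TiB, and target_objects 500000 matches the 500k cap from the health
warning quoted further down, so the limits are set as intended.)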
> >
> > Regs,
> > Icy
> >
> >
> >
> > On Thu, 28 May 2020 at 18:25, Eugen Block <eblock@xxxxxx> wrote:
> >
> >> I don't see a cache_mode enabled on the pool, did you set one?
> >>
> >>
> >>
> >> Quoting icy chan <icy.kf.chan@xxxxxxxxx>:
> >>
> >> > Hi,
> >> >
> >> > I had configured a cache tier with a maximum object count of 500k, but
> >> > no eviction happens when the object count hits the configured maximum.
> >> > Has anyone experienced this issue? What should I do?
> >> >
> >> > $ ceph health detail
> >> > HEALTH_WARN 1 cache pools at or near target size
> >> > CACHE_POOL_NEAR_FULL 1 cache pools at or near target size
> >> >     cache pool 'cached-hdd-cache' with 887.11k objects at/near target max 500k objects
> >> >
> >> > $ ceph df | grep -e "POOL\|cached-hdd"
> >> > POOLS:
> >> >     POOL                 ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
> >> >     cached-hdd           24     1.4 TiB       1.52M     1.4 TiB      0.60        78 TiB
> >> >     cached-hdd-cache     25     842 GiB     887.14k     842 GiB     15.97       1.4 TiB
> >> >
> >> > $ ceph osd pool get cached-hdd-cache all
> >> > size: 3
> >> > min_size: 1
> >> > pg_num: 128
> >> > pgp_num: 128
> >> > crush_rule: nvme-repl-rule
> >> > hashpspool: true
> >> > nodelete: false
> >> > nopgchange: false
> >> > nosizechange: false
> >> > write_fadvise_dontneed: false
> >> > noscrub: false
> >> > nodeep-scrub: false
> >> > hit_set_type: bloom
> >> > hit_set_period: 1200
> >> > hit_set_count: 4
> >> > hit_set_fpp: 0.05
> >> > use_gmt_hitset: 1
> >> > target_max_objects: 500000
> >> > target_max_bytes: 1099511627776
> >> > cache_target_dirty_ratio: 0
> >> > cache_target_dirty_high_ratio: 0.7
> >> > cache_target_full_ratio: 0.9
> >> > cache_min_flush_age: 0
> >> > cache_min_evict_age: 0
> >> > min_read_recency_for_promote: 1
> >> > min_write_recency_for_promote: 1
> >> > fast_read: 0
> >> > hit_set_grade_decay_rate: 20
> >> > hit_set_search_last_n: 1
> >> > pg_autoscale_mode: warn
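
(For scale: with cache_target_full_ratio 0.9 and target_max_objects 500000, eviction
would normally be expected to start around 0.9 x 500000 = 450000 objects, so the
887.11k objects in the warning above are well past that point.)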
> >> >
> >> >
> >> > Regs,
> >> > Icy
> >> > _______________________________________________
> >> > ceph-users mailing list -- ceph-users@xxxxxxx
> >> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
> >>
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


