1. The kvm pool and the cache-pool may contain the same object at the
same time. In other words, an object may use actual disk capacity in
both the kvm pool and the cache-pool, so I think 644245094400 is also
too large for the cache-pool. You should calculate the maximum total
object size in the kvm pool first (a rough sizing sketch is appended
below).

2. There is a bug in the cache pool: it may not evict objects based on
cache_min_flush_age. If you write too many objects in a short time, you
may hit this bug. There is a patch for it,
https://github.com/ceph/ceph/pull/2856, but it has not been backported
yet (see the example appended below).

2015-04-09 22:15 GMT+08:00 Patrik Plank <patrik@xxxxxxxx>:
> Hi,
>
> I set the cache-tier size to 644245094400.
> This should work, but it is the same.
>
> thanks
> regards
>
>
> -----Original message-----
> From: Gregory Farnum <greg@xxxxxxxxxxx>
> Sent: Thursday 9th April 2015 15:44
> To: Patrik Plank <patrik@xxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: cache-tier do not evict
>
> On Thu, Apr 9, 2015 at 4:56 AM, Patrik Plank <patrik@xxxxxxxx> wrote:
>> Hi,
>>
>> I have built a cache-tier pool (replica 2) with 3 x 512 GB SSDs for my
>> kvm pool.
>>
>> These are my settings:
>>
>> ceph osd tier add kvm cache-pool
>> ceph osd tier cache-mode cache-pool writeback
>> ceph osd tier set-overlay kvm cache-pool
>>
>> ceph osd pool set cache-pool hit_set_type bloom
>> ceph osd pool set cache-pool hit_set_count 1
>> ceph osd pool set cache-pool hit_set_period 3600
>>
>> ceph osd pool set cache-pool target_max_bytes 751619276800
>
> ^ 750 GB. For 3 x 512 GB disks that's too large a target value.
>
>> ceph osd pool set cache-pool target_max_objects 1000000
>>
>> ceph osd pool set cache-pool cache_min_flush_age 1800
>> ceph osd pool set cache-pool cache_min_evict_age 600
>>
>> ceph osd pool set cache-pool cache_target_dirty_ratio .4
>> ceph osd pool set cache-pool cache_target_full_ratio .8
>>
>> So the problem is that the cache tier does not evict automatically.
>> If I copy some kvm images to the ceph cluster, the cache OSDs always
>> run full.
>>
>> Is that normal?
>> Is there a misconfiguration?
>>
>> thanks
>> best regards
>> Patrik
>

--
Regards,
xinze
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
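
A rough sizing sketch, assuming replica 2 on the 3 x 512 GB SSDs (the
500 GiB figure below is only an illustration, not a value taken from
this thread): raw capacity is about 1536 GB, so the cache pool can hold
at most roughly 768 GB of logical data, and target_max_bytes has to
stay well below that to leave headroom for flushing and eviction.

# Check real usage and the replication factor before picking a value.
ceph df detail
ceph osd pool get cache-pool size

# Illustrative values only: cap the cache at ~500 GiB (536870912000 bytes),
# so the agent starts flushing at 40% and evicting at 80% of that target.
ceph osd pool set cache-pool target_max_bytes 536870912000
ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
ceph osd pool set cache-pool cache_target_full_ratio 0.8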
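
As a sketch for draining the cache by hand while the eviction behaviour
is being sorted out (pool name taken from the thread; run it during a
quiet period, since it flushes every dirty object back to the kvm pool):

# Flush dirty objects and evict clean ones from the cache pool by hand.
rados -p cache-pool cache-flush-evict-all

# Watch the cache pool's space usage drop while it runs.
watch -n 10 'ceph df | grep cache-pool'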