On Thu, Apr 9, 2015 at 4:56 AM, Patrik Plank <patrik@xxxxxxxx> wrote:
> Hi,
>
> I have built a cache-tier pool (replica 2) with 3 x 512 GB SSDs for my kvm
> pool.
>
> These are my settings:
>
> ceph osd tier add kvm cache-pool
> ceph osd tier cache-mode cache-pool writeback
> ceph osd tier set-overlay kvm cache-pool
>
> ceph osd pool set cache-pool hit_set_type bloom
> ceph osd pool set cache-pool hit_set_count 1
> ceph osd pool set cache-pool hit_set_period 3600
>
> ceph osd pool set cache-pool target_max_bytes 751619276800

^ 750 GB. For 3*512 GB disks that's too large a target value: with replica 2
the cache pool's usable capacity is only about 768 GB, so the OSDs fill up
before the eviction target is ever reached.

> ceph osd pool set cache-pool target_max_objects 1000000
>
> ceph osd pool set cache-pool cache_min_flush_age 1800
> ceph osd pool set cache-pool cache_min_evict_age 600
>
> ceph osd pool set cache-pool cache_target_dirty_ratio .4
> ceph osd pool set cache-pool cache_target_full_ratio .8
>
> So the problem is, the cache tier does not evict automatically.
>
> If I copy some kvm images to the ceph cluster, the cache OSDs always run
> full.
>
> Is that normal?
>
> Is there a misconfiguration?
>
> Thanks
>
> Best regards
> Patrik
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
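
A sketch of how a safer target could be sized, not from the original thread: the percentages and headroom figure below are assumptions, and the pool name `cache-pool` is taken from the settings quoted above. The idea is to divide raw SSD capacity by the replica count and leave headroom below the OSD full thresholds:

```shell
# Hypothetical sizing sketch: 3 OSDs x 512 GB, replica 2,
# capped at 80% of usable space (assumed headroom, not a Ceph default).
RAW_BYTES=$((3 * 512 * 1000 * 1000 * 1000))   # 1536 GB raw across the SSDs
REPLICAS=2                                     # replica 2, as in the pool above
HEADROOM_PCT=80                                # use at most 80% of usable space
TARGET=$((RAW_BYTES / REPLICAS * HEADROOM_PCT / 100))
echo "$TARGET"                                 # 614400000000 (~614 GB)

# Then apply it to the cache pool (requires a live cluster, so commented out):
# ceph osd pool set cache-pool target_max_bytes "$TARGET"
```

With replica 2, any target_max_bytes near or above ~768 GB can never trigger flushing/eviction before the OSDs themselves run full.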