Cache tier unable to auto flush data to storage tier

Thanks JC, it worked. The cache tiering agent is now migrating data between tiers.


But now I am seeing a new issue: the cache-pool has some extra objects that are not visible with "rados -p cache-pool ls", yet "ceph df" does show them in the object count.

[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
    NAME                   ID     USED      %USED     OBJECTS
    EC-pool                15     1000M     1.21      2
    cache-pool             16     252       0         3
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# rados -p cache-pool ls
[root@ceph-node1 ~]# rados -p cache-pool cache-flush-evict-all
[root@ceph-node1 ~]# rados -p cache-pool ls
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
    NAME                   ID     USED      %USED     OBJECTS
    EC-pool                15     1000M     1.21      2
    cache-pool             16     252       0         3
[root@ceph-node1 ~]#
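
A couple of standard commands that can cross-check the per-pool object counts and the hit_set configuration (just a diagnostic sketch; whether the hit_set keys can be queried with "pool get" depends on the installed version):

# ceph df detail
# ceph osd pool get cache-pool hit_set_count
# ceph osd pool get cache-pool hit_set_period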


Also, when I create ONE object manually, "ceph df" says that 2 objects have been added. Where is this extra object coming from?

[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
    NAME                   ID     USED      %USED     OBJECTS
    EC-pool                15     1000M     1.21      2
    cache-pool             16     252       0         3
[root@ceph-node1 ~]#
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# rados -p cache-pool put test /etc/hosts        ( I added one object in this step )
[root@ceph-node1 ~]# rados -p cache-pool ls                         ( when I list, I can see only the 1 object I just created )
test
[root@ceph-node1 ~]# ceph df | egrep -i "objects|pool"
POOLS:
    NAME                   ID     USED      %USED     OBJECTS
    EC-pool                15     1000M     1.21      2
    cache-pool             16     651       0         5             ( why is it showing 5 objects, while earlier it showed 3? Why did it increase by 2 when I added only 1 object? )
[root@ceph-node1 ~]#
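
Two more commands that could help narrow down where the extra object count comes from (only a sketch; "test" is the object created above):

# rados -p cache-pool stat test
# rados df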


- Karan -

On 14 Sep 2014, at 03:42, Jean-Charles LOPEZ <jc.lopez@inktank.com> wrote:

> Hi Karan,
> 
> Maybe try setting the dirty byte ratio (flush) and the full ratio (eviction), just to see if it makes any difference:
> - cache_target_dirty_ratio .1
> - cache_target_full_ratio .2
> 
> Tune the percentages as desired relative to target_max_bytes and target_max_objects. Whichever threshold is reached first (number of objects or number of bytes) will trigger flushing or eviction.
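> 
> As a rough sketch, those ratios map onto the usual pool settings (the 0.1 / 0.2 values are just the example numbers above):
> ceph osd pool set cache-pool cache_target_dirty_ratio 0.1
> ceph osd pool set cache-pool cache_target_full_ratio 0.2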
> 
> JC
> 
> 
> 
> On Sep 13, 2014, at 15:23, Karan Singh <karan.singh@csc.fi> wrote:
> 
>> Hello Cephers
>> 
>> I have created a cache pool, and it looks like the cache tiering agent is not able to flush/evict data according to the defined policy. However, when I manually flush/evict the data, it does migrate from the cache tier to the storage tier.
>> 
>> Kindly advise if there is something wrong with the policy, or if there is anything else I am missing.
>> 
>> Ceph Version: 0.80.5
>> OS: CentOS 6.4
>> 
>> The cache pool was created using the following commands (a short verification sketch follows the list):
>> 
>> ceph osd tier add data cache-pool 
>> ceph osd tier cache-mode cache-pool writeback
>> ceph osd tier set-overlay data cache-pool
>> ceph osd pool set cache-pool hit_set_type bloom
>> ceph osd pool set cache-pool hit_set_count 1
>> ceph osd pool set cache-pool hit_set_period 300
>> ceph osd pool set cache-pool target_max_bytes 10000
>> ceph osd pool set cache-pool target_max_objects 100
>> ceph osd pool set cache-pool cache_min_flush_age 60
>> ceph osd pool set cache-pool cache_min_evict_age 60
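>> 
>> As a quick way to verify what actually got applied (just a sketch: query the pool back, or grep the OSD map):
>> ceph osd pool get cache-pool hit_set_period
>> ceph osd pool get cache-pool target_max_objects
>> ceph osd dump | grep cache-pool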
>> 
>> 
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 00:49:59 EEST 2014
>> [root@ceph-node1 ~]# rados -p data put file1 /etc/hosts
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>> 
>> 
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 00:59:33 EEST 2014
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]# 
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>> 
>> 
>> [root@ceph-node1 ~]# date
>> Sun Sep 14 01:08:02 EEST 2014
>> [root@ceph-node1 ~]# rados -p data ls
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> file1
>> [root@ceph-node1 ~]#
>> 
>> 
>> 
>> [root@ceph-node1 ~]# rados -p cache-pool cache-flush-evict-all
>> file1
>> [root@ceph-node1 ~]#
>> [root@ceph-node1 ~]# rados -p data ls
>> file1
>> [root@ceph-node1 ~]# rados -p cache-pool ls
>> [root@ceph-node1 ~]#
>> 
>> 
>> Regards
>> Karan Singh
>> 

