Cache tiering and target_max_bytes

On Thu, 14 Aug 2014, Paweł Sadowski wrote:
> Hello,
> 
> I have a cluster of 35 OSDs (30 HDD, 5 SSD) with cache tiering configured.
> During tests it looks like Ceph is not respecting the target_max_bytes
> setting. Steps to reproduce (a sketch of such a setup is shown below):
>  - configure cache tiering
>  - set target_max_bytes to 32G (on the hot pool)
>  - write more than 32G of data
>  - nothing happens
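
For reference, a writeback cache tier with a 32G limit is typically set up
along the lines below; the pool names (coldpool/hotpool) are placeholders,
not taken from the report above:

  ceph osd tier add coldpool hotpool
  ceph osd tier cache-mode hotpool writeback
  ceph osd tier set-overlay coldpool hotpool
  ceph osd pool set hotpool hit_set_type bloom
  ceph osd pool set hotpool target_max_bytes 34359738368   # 32 GiB

As far as I know, the tiering agent sizes its flush/evict work against
target_max_bytes and/or target_max_objects, so at least one of them needs
to be set on the hot pool for eviction to happen at all.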

Can you run 'ceph pg dump pools -f json-pretty' at this point? Then pick a
random PG in the cache pool and capture the output of 'ceph pg <pgid>
query'.
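
One way to do that, assuming the hot pool's numeric id turns out to be 5
(both the pool id and the pgid 5.1a below are only examples):

  ceph pg dump pools -f json-pretty > pools.json
  ceph osd lspools                                    # note the hot pool's id
  ceph pg dump | awk '$1 ~ /^5\./ {print $1}' | head  # a few PGs in that pool
  ceph pg 5.1a query > pg-5.1a-query.txt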

Then run: ceph tell osd.* injectargs '--debug-ms 1 --debug-osd 20'
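
The elevated debug levels can later be reverted the same way, for example
back to roughly the stock levels:

  ceph tell osd.* injectargs '--debug-ms 0/0 --debug-osd 0/5'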

> If I set target_max_bytes again (to the same value), or change any other
> option (for example cache_min_evict_age), Ceph will start to move data
> from the hot pool to the base pool.
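
For completeness: besides target_max_bytes, flushing and eviction are
throttled by the dirty/full ratios on the cache pool, so it may be worth
confirming they hold sane values (0.4 and 0.8 are the usual defaults;
hotpool is again a placeholder name):

  ceph osd pool set hotpool cache_target_dirty_ratio 0.4
  ceph osd pool set hotpool cache_target_full_ratio 0.8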

Once it starts going, capture an OSD log (/var/log/ceph/ceph-osd.NNN.log) 
for an OSD that is now moving data.
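
If it is unclear which OSD to pick, the acting set of the cache PG queried
earlier points at candidates (again using the hypothetical pgid 5.1a):

  ceph pg map 5.1a        # prints the up/acting OSD ids for that PG

The primary (the first id in the acting set) is a reasonable choice.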

Thanks!
sage

> 
> I'm using Ceph version 0.80.4 (with a cherry-picked patch for bug
> http://tracker.ceph.com/issues/8982).
> 
> Is there a way to make it work as expected?
> 
> -- 
> PS
> 
> 

