Re: bursty IO, ceph cache pool can not follow evictions

On 06/02/2015 07:08 PM, Nick Fisk wrote:
Hi Kenneth,

I suggested an idea which may help with this; it is currently being
developed:

https://github.com/ceph/ceph/pull/4792

In short there is a high and low threshold with different flushing
priorities. Hopefully this will help with bursty workloads.
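As a rough sketch of how it would be used (the cache_target_dirty_high_ratio
name comes from the PR and may still change before it is merged):

ceph osd pool set cache cache_target_dirty_ratio 0.4       # low threshold: lazy flushing
ceph osd pool set cache cache_target_dirty_high_ratio 0.6  # high threshold: aggressive flushing

Between the two ratios the agent flushes at low priority; above the high
ratio it flushes at full speed.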

Thanks! Will this also increase the absolute flushing speed? I think the problem is really the absolute speed: it is not my workload that is bursty, but the writes actually reaching the ceph cluster, because the cache flushes more slowly than new data comes in. I can see that my cold storage disks aren't doing much work (see the iostat output in my other email), so is there a way to increase the flushing speed by tuning the cache agent, e.g. for more parallelism?
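For example, would bumping the agent's flush parallelism along these lines
be a sane thing to try? (I am assuming osd_agent_max_ops /
osd_agent_max_low_ops are the relevant knobs here; I believe they default
to 4 and 2.)

# double the number of concurrent flush operations the tiering agent may issue
ceph tell osd.* injectargs '--osd_agent_max_ops 8 --osd_agent_max_low_ops 4'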


Nick

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Kenneth Waegeman
Sent: 02 June 2015 17:54
To: ceph-users@xxxxxxxxxxxxxx
Subject:  bursty IO, ceph cache pool can not follow evictions

Hi,

we were rsync-streaming with 4 cephfs clients to a ceph cluster with a
cache layer on top of an erasure-coded pool.
This had been running for some time without any real problems.

Today we added 2 more streams, and very soon we saw some strange
behaviour:
- We are getting blocked requests on our cache pool OSDs
- Our cache pool is often near or at its max ratio
- Our data streams have very bursty IO (streaming a few hundred MB for a
minute and then nothing)

Our OSDs are not overloaded (neither the EC ones nor the cache ones,
checked with iostat), yet it seems the cache pool cannot evict objects in
time, and requests get blocked until it catches up, over and over again.
If I raise the target_max_bytes limit, it starts streaming again until the
pool is full again.
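In case it is useful, this is roughly how I have been watching it
(assuming these commands report what I think they do on our version):

ceph health detail          # which OSDs have the blocked requests
ceph df detail              # USED vs. DIRTY bytes for the cache pool
ceph osd pool stats cache   # client IO vs. flush/evict activity on the cache pool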

cache parameters we have are these:
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600
ceph osd pool set cache target_max_bytes $((14*75*1024*1024*1024))
ceph osd pool set cache cache_target_dirty_ratio 0.4
ceph osd pool set cache cache_target_full_ratio 0.8
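(For reference: that target_max_bytes works out to 14*75 GiB = 1050 GiB,
so with cache_target_dirty_ratio 0.4 flushing should kick in around 420
GiB of dirty data, and cache_target_full_ratio 0.8 should start evicting
around 840 GiB, if I understand these ratios correctly.)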


What could be the issue here? I tried to find some information about the
'cache agent', but could only find some old references.

Thank you!

Kenneth



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



