Re: bursty IO, ceph cache pool can not follow evictions

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Kenneth Waegeman
> Sent: 03 June 2015 10:51
> To: Nick Fisk; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  bursty IO, ceph cache pool can not follow evictions
> 
> 
> 
> On 06/02/2015 07:08 PM, Nick Fisk wrote:
> > Hi Kenneth,
> >
> > I suggested an idea which may help with this; it is currently being
> > developed.
> >
> > https://github.com/ceph/ceph/pull/4792
> >
> > In short there is a high and low threshold with different flushing
> > priorities. Hopefully this will help with bursty workloads.
> 
> Thanks! Will this also increase the absolute flushing speed? I think the
> problem is more the absolute speed: it is not my workload that is bursty,
> but the actual processing by the ceph cluster, because the cache flushes
> slower than new data enters.
> Now I see my cold storage disks aren't seeing much usage (see the iostat
> output in my other email), so is there a way to increase the flushing
> speed by tuning the cache agent, e.g. for parallelism?

To be honest with you, I'm not 100% sure. I see similar issues to yours,
where performance seems really poor compared to the actual amount of disk
activity. My best hunch is that it's to do with overwriting existing blocks
that first have to be promoted to the cache tier before being overwritten.
These will always block the operation, as it can't continue until the
object is in the cache tier.
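If you want to check whether promotions and flushes line up with the
stalls, something like the following might help. This is only a rough
sketch: the pool name "cache" is an assumption, and the exact counters
shown by `ceph osd pool stats` vary between releases.

```shell
# Watch per-pool IO rates once a second; while the tiering agent is
# active, the cache pool's line also shows flush/evict/promote activity
# (pool name "cache" is assumed here).
watch -n 1 'ceph osd pool stats cache'

# Blocked/slow requests are listed here, with the OSDs involved:
ceph health detail
```

If the blocked requests always coincide with bursts of promote activity,
that would support the promote-on-overwrite theory above.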

A few things I have been toying with seem to help in my case; I'm not sure
if they will help in yours though:
1. Making the object size smaller. I.e. for RBD, dropping from 4MB objects
to 1MB decreases the latency of promotion/demotion operations.
2. Making the cache tier a lot bigger to reduce promotions/demotions.
3. Using something like flashcache in front of the RBD to hide this
promotion latency from the workload.
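For reference, points 1 and 2 roughly look like this on the CLI. The image
name, size, and byte values are examples only, not recommendations; with
rbd, `--order 20` means 2^20-byte (1MB) objects, versus the default order
22 (4MB).

```shell
# 1. Create an RBD image with 1MB objects instead of the default 4MB
#    (image name and size are examples).
rbd create rbd/testimg --size 102400 --order 20

# 2. Grow the cache tier's capacity target so evictions happen less often
#    (example value; it must fit what the cache pool hardware can hold).
ceph osd pool set cache target_max_bytes $((28*75*1024*1024*1024))
```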

Nick

> 
> >
> > Nick
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Kenneth Waegeman
> >> Sent: 02 June 2015 17:54
> >> To: ceph-users@xxxxxxxxxxxxxx
> >> Subject:  bursty IO, ceph cache pool can not follow
> >> evictions
> >>
> >> Hi,
> >>
> >> we were rsync-streaming with 4 cephfs clients to a ceph cluster with
> >> a cache layer on top of an erasure coded pool.
> >> This had been going on for some time without any real problems.
> >>
> >> Today we added 2 more streams, and very soon we saw some strange
> >> behaviour:
> >> - We are getting blocked requests on our cache pool osds
> >> - our cache pool is often near/ at max ratio
> >> - Our data streams have very bursty IO (streaming a few hundred MB
> >> for a minute, then nothing)
> >>
> >> Our OSDs are not overloaded (neither the EC nor the cache ones,
> >> checked with iostat), though it seems the cache pool can not evict
> >> objects in time and gets blocked until it can, each time again.
> >> If I raise the target_max_bytes limit, it starts streaming again
> >> until it is full again.
> >>
> >> cache parameters we have are these:
> >> ceph osd pool set cache hit_set_type bloom
> >> ceph osd pool set cache hit_set_count 1
> >> ceph osd pool set cache hit_set_period 3600
> >> ceph osd pool set cache target_max_bytes $((14*75*1024*1024*1024))
> >> ceph osd pool set cache cache_target_dirty_ratio 0.4
> >> ceph osd pool set cache cache_target_full_ratio 0.8
> >>
> >>
> >> What could be the issue here? I tried to find some information about
> >> the 'cache agent', but could only find some old references.
> >>
> >> Thank you!
> >>
> >> Kenneth
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >







