Re: osd_agent_max_ops relating to number of OSDs in the cache pool

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Gregory Farnum
> Sent: 22 July 2015 15:05
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  osd_agent_max_ops relating to number of OSDs in
> the cache pool
> 
> On Sat, Jul 18, 2015 at 10:25 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > Hi All,
> >
> > I’m doing some testing on the new high/low-speed cache tiering flushing,
> > and I’m trying to get my head round the effect that changing these two
> > settings has on the flushing speed. When setting osd_agent_max_ops to 1,
> > I can get up to a 20% improvement before the osd_agent_max_high_ops value
> > kicks in for high-speed flushing, which is great for bursty workloads.
> >
> > As I understand it, these settings loosely affect the number of concurrent
> > operations the cache pool OSDs will flush down to the base pool.
> >
> > I may have got completely the wrong idea, but I can’t see how a static
> > default setting will work with different cache/base ratios. For example,
> > if I had a relatively small number of very fast cache tier OSDs (PCIe
> > SSDs, perhaps) and a much larger number of base tier OSDs, would the
> > value need to be increased to ensure sufficient utilisation of the base
> > tier and to make sure the cache tier doesn’t fill up too fast?
> >
> > Alternatively, where the cache tier is based on spinning disks, or where
> > the base tier is not comparatively as large, the value may need to be
> > reduced to stop it saturating the disks.
> >
> > Any Thoughts?
> 
> I'm not terribly familiar with these exact values, but I think you've got it right.
> We can't make decisions at the level of the entire cache pool (because
> sharing that information isn't feasible), so we let you specify it on a per-OSD
> basis according to what setup you have.
> 
> I've no idea if anybody has gathered up a matrix of baseline good settings or
> not.

Thanks for your response. I will run a couple of tests to see if I can work out a rough rule of thumb for these settings. I'm guessing you don't want more than 1 or 2 concurrent ops per spinning disk, to avoid overloading them. Maybe something like:

(# Base Tier Disks / Copies) / # Cache Tier Disks = Optimum number of concurrent flush operations
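That heuristic could be sketched as a few lines of Python. This is only my rule of thumb, not anything derived from the Ceph source; the function name and the floor-at-1 behaviour are my own assumptions:

```python
def optimal_flush_ops(base_tier_disks, copies, cache_tier_disks):
    """Rough rule of thumb for osd_agent_max_ops on each cache-tier OSD:
    spread the base tier's effective write capacity (disks divided by the
    replication factor) evenly across the cache-tier OSDs doing the
    flushing. Floored at 1, since each OSD needs at least one flush op.
    This is a guess to be validated by testing, not a Ceph-provided formula."""
    ops = (base_tier_disks / copies) / cache_tier_disks
    return max(1, round(ops))

# e.g. 36 base-tier spinners, 3x replication, 4 cache-tier SSDs:
# (36 / 3) / 4 = 3 concurrent flush ops per cache-tier OSD
print(optimal_flush_ops(36, 3, 4))
```

With a large cache tier over a small base tier the raw ratio drops below 1, which is where the "reduce it to stop saturating the disks" case above would apply.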

> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
