Re: Erasure pool performance expectations

Hello,

On Mon, 16 May 2016 13:14:29 +0200 Peter Kerdisle wrote:

See my reply all the way down at the bottom.

> On Mon, May 16, 2016 at 12:20 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> 
> >
> >
> > > -----Original Message-----
> > > From: Peter Kerdisle [mailto:peter.kerdisle@xxxxxxxxx]
> > > Sent: 16 May 2016 11:04
> > > To: Nick Fisk <nick@xxxxxxxxxx>
> > > Cc: ceph-users@xxxxxxxxxxxxxx
> > > Subject: Re:  Erasure pool performance expectations
> > >
> > >
> > > On Mon, May 16, 2016 at 11:58 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > > > -----Original Message-----
> > > > From: Peter Kerdisle [mailto:peter.kerdisle@xxxxxxxxx]
> > > > Sent: 16 May 2016 10:39
> > > > To: nick@xxxxxxxxxx
> > > > Cc: ceph-users@xxxxxxxxxxxxxx
> > > > Subject: Re:  Erasure pool performance expectations
> > > >
> > > > I'm forcing a flush by lowering the cache_target_dirty_ratio to a
> > > > lower value. This forces writes to the EC pool, and these are the
> > > > operations I'm trying to throttle a bit. Am I understanding you
> > > > correctly that this throttling only works for the other way around?
> > > > Promoting cold objects into the hot cache?
> > >
> > > Yes that’s correct. You want to throttle the flushes, which is done
> > > by other settings.
> > >
> > > Firstly set something like this in your ceph.conf
> > > osd_agent_max_low_ops = 1
> > > osd_agent_max_ops = 4
> > > I did not know about this, that's great, I will play around with
> > > these.
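
For reference, a minimal sketch of how these could be applied; the values
are just the starting points suggested above, not recommendations:

  # ceph.conf, [osd] section
  osd_agent_max_low_ops = 1
  osd_agent_max_ops = 4

  # or inject at runtime on all OSDs, no restart needed
  ceph tell osd.* injectargs '--osd_agent_max_low_ops 1 --osd_agent_max_ops 4'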
> > >
> > >
> > > This controls how many parallel threads the tiering agent will use.
> > > You can bump them up later if needed.
> > >
> > > Next set these two settings on your cache pools. Try and keep them
> > > about .2 apart, so something like .4 and .6 is good to start with:
> > > cache_target_dirty_ratio
> > > cache_target_dirty_high_ratio
> > > Here is actually the heart of the matter. Ideally I would love to run
> > > it at 0.0 if that makes sense. I want no dirty objects in my hot
> > > cache at all, has anybody ever tried this? Right now I'm just pushing
> > > cache_target_dirty_ratio down during low-activity moments by setting
> > > it to 0.2 and then bringing it back up to 0.6 when it's done or
> > > activity starts up again.
> >
> > You might want to rethink that slightly. Keeping in mind that with EC
> > pools currently any write will force a promotion and then dirty the
> > object. If you are then almost immediately flushing the objects back
> > down, you are going to end up with a lot of amplification for writes.
> > You want to keep dirty objects in the cache pool so that you don't
> > incur this penalty if they are going to be written to again in the
> > near future.
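
To make that amplification concrete, with purely illustrative numbers: with
4MB objects, a k=3/m=2 EC pool and a 3x replicated cache tier, one small
write to a cold object means reading ~4MB worth of chunks from the EC pool,
writing ~12MB into the cache tier for the promotion, and, if the object is
flushed right away, writing another ~6.7MB of EC chunks back down, all to
service a single small client write. Keeping the object dirty in the cache
for a while means any follow-up writes only touch the cache tier.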
> >
> 
> After reading this again it does raise another question. I'm not sure I
> understood promotions correctly before this response. A promotion is the
> act of moving an object from the EC pool onto the cache tier, is this
> correct? If so, does a promotion remove the actual data from the EC pool
> and make it dirty again? I was under the impression it was still on the
> EC pool but also on the cache tier at that point.
> 
> >
> > I'm guessing what you want is a buffer, so that you can have bursts of
> > activity without incurring the performance penalty of flushing? That’s
> > hopefully what the high and low flush ratios should give you. By setting
> > them to .4 and .6, you will have a .2 x "Cache Tier Capacity" buffer of
> > cache tier space, where only slow flushing will occur.
> >
> 
> This is indeed one of the reasons. The other reason was that I thought
> that by removing dirty objects I didn't need replication on the cache
> tier, which I'm now starting to doubt again...

You absolutely want your cache tier to have sufficient replication.
A size of 2 at the very least, and only if your nodes and network are
highly resilient and your SSDs have high endurance and reliability (and
have their wear levels monitored).
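
As a concrete example (the pool name "hot-cache" is again only a
placeholder), the replica count is set on the cache pool like on any
other pool:

  ceph osd pool set hot-cache size 2
  ceph osd pool set hot-cache min_size 1

size 2 / min_size 1 is the bare minimum described above; size 3 with
min_size 2 is the safer default if you can afford the SSD space.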

Any written data exists only in the cache tier until it gets flushed.
And unless you set your dirty ratio to 0, it will stay there forever if it
is hot enough (frequently written to).

But that setting of 0 still leaves you with a failure window AND of course
reduces your performance massively (basically down to the level of the
backing store).

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



