Re: Cache Tiering with Same Cache Pool

Hello,

On Thu, 23 Jun 2016 14:28:30 +0700 Lazuardi Nasution wrote:

> Hi Christian,
> 
> So, it seems that at first I must set target_max_bytes to the Max. Available
> size divided by the number of cache pools, with allowance for the worst-case
> number of OSDs down, isn't it? 

Correct.
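
For illustration, a back-of-the-envelope version of that calculation
(the numbers and pool names below are made up, adjust to your cluster):

  # "MAX AVAIL" from "ceph df detail" is 1 TiB here; one of four SSD
  # hosts may fail, so only plan for 75% of it, split over 2 cache pools.
  MAX_AVAIL=$((1024**4))
  PER_POOL=$((MAX_AVAIL * 75 / 100 / 2))

  # Placeholder pool names, substitute your actual cache pools:
  ceph osd pool set cache-rbd    target_max_bytes $PER_POOL
  ceph osd pool set cache-cephfs target_max_bytes $PER_POOL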

>And then after a while, I adjust
> target_max_bytes per cache pool by monitoring the "ceph df detail" output to
> see which one should have more space and which one should have less, but the
> total still must not exceed the Max. Available size reduced by the worst-case
> OSDs-down percentage.
> 
Again, if you have existing pools that you want to add cache pools to, you
should already have some idea of their I/O needs from the reads and writes
in df detail or the other tools I mentioned.
And thus how to size them accordingly.

A cache pool that fills up the quickest might have been hit by nothing more
than a single large copy, so look at IOPS first, then at data volume.
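
To tell those two cases apart, something like this is usually enough
(pool names again made up):

  ceph df detail                    # cumulative reads/writes per pool
  ceph osd pool stats cache-rbd     # current client IOPS and throughput
  ceph osd pool stats cache-cephfs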

> By the way, since there is no maximum age before an object is flushed
> (dirty) or evicted (clean), will lowering hit_set_period be helpful?
> 
I'm not sure what you're asking here, as hit_set_period only affects
promotions, not flushes or evictions. 

And you probably want to set minimum ages, depending on your usage patterns
and cache size.
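
For example (made-up pool name and values, pick ones matching your
workload):

  # Keep dirty objects at least 10 minutes before flushing and clean
  # objects at least 30 minutes before eviction:
  ceph osd pool set cache-rbd cache_min_flush_age 600
  ceph osd pool set cache-rbd cache_min_evict_age 1800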

Christian
> Best regards,
> 
> On Thu, Jun 23, 2016 at 7:23 AM, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> >
> > Hello,
> >
> > On Wed, 22 Jun 2016 15:40:40 +0700 Lazuardi Nasution wrote:
> >
> > > Hi Christian,
> > >
> > > If I have several cache pools on the same SSD OSDs (by using the same
> > > ruleset), so those cache pools always show the same Max. Available in
> > > the "ceph df detail" output,
> >
> > That's true for all pools that share the same backing storage.
> >
> > >what should I put in target_max_bytes of the cache tiering
> > > configuration for each cache pool? Should it be the same and use the
> > > Max. Available size?
> >
> > Definitely not; you will want to at least subtract enough space from
> > your available size to avoid having one failed OSD generate a full
> > disk situation, and even more to cover a failed-host scenario.
> > Then you want to divide the rest by the number of pools you plan to
> > put on there and set that as the target_max_bytes in the simplest case.
> >
> > >If different, how can I know whether one cache pool needs more
> > > space than another?
> > >
> > By looking at df detail again, the usage is per pool after all.
> >
> > But a cache pool will of course use all the space it has, so that's
> > not a good way to determine your needs.
> > Watching how fast they fill up may be more helpful.
> >
> > You should have a decent idea of your needs before doing cache
> > tiering, by monitoring the pools (and their storage) you want to cache,
> > again with "df detail" (how many writes/reads?), "ceph -w", atop or
> > iostat, etc.
> >
> > Christian
> >
> > > Best regards,
> > >
> > > Date: Mon, 20 Jun 2016 09:34:05 +0900
> > > > From: Christian Balzer <chibi@xxxxxxx>
> > > > To: ceph-users@xxxxxxxxxxxxxx
> > > > Cc: Lazuardi Nasution <mrxlazuardin@xxxxxxxxx>
> > > > Subject: Re:  Cache Tiering with Same Cache Pool
> > > > Message-ID: <20160620093405.732f55d8@xxxxxxxxxxxxxxxxxx>
> > > > Content-Type: text/plain; charset=US-ASCII
> > > >
> > > > On Mon, 20 Jun 2016 00:14:55 +0700 Lazuardi Nasution wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Is it possible to do cache tiering for some storage pools with
> > > > > the same cache pool?
> > > >
> > > > As mentioned several times on this ML, no.
> > > > There is a strict 1:1 relationship between base and cache pools.
> > > > You can of course (if your SSDs/NVMes are large and fast enough)
> > > > put more than one cache pool on them.
> > > >
> > > > > What will happen if the cache pool is broken or at least doesn't
> > > > > meet quorum while the storage pool is OK?
> > > > >
> > > > With a read-only cache pool nothing should happen, as all writes
> > > > are going to the base pool.
> > > >
> > > > In any other mode (write-back, read-forward or read-proxy) your
> > > > hottest objects are likely to be ONLY on the cache pool and never
> > > > getting flushed to the base pool.
> > > > So that means, if your cache pool fails, your cluster is
> > > > essentially dead or at the very least has suffered massive data
> > > > loss.
> > > >
> > > > Something to very much think about when doing cache tiering.
> > > >
> > > > Christian
> > > > --
> > > > Christian Balzer        Network/Systems Engineer
> > > > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> > > > http://www.gol.com/
> >
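
(Regarding the 1:1 relationship quoted above: each base pool needs its
own cache pool, but those cache pools can share the same SSD ruleset.
Roughly, with made-up pool names:

  ceph osd tier add rbd    cache-rbd
  ceph osd tier add cephfs cache-cephfs
  ceph osd tier cache-mode cache-rbd    writeback
  ceph osd tier cache-mode cache-cephfs writeback
  ceph osd tier set-overlay rbd    cache-rbd
  ceph osd tier set-overlay cephfs cache-cephfs
)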


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


