Antw: Re: SSD Caching

>>> Christian Balzer <chibi@xxxxxxx> wrote on Thursday, 27 October 2016 at 13:55:

Hi Christian,

> 
> Hello,
> 
> On Thu, 27 Oct 2016 11:30:29 +0200 Steffen Weißgerber wrote:
> 
>> 
>> 
>> 
>> >>> Christian Balzer <chibi@xxxxxxx> wrote on Thursday, 27 October 2016 at 04:07:
>> 
>> Hi,
>> 
>> > Hello,
>> > 
>> > On Wed, 26 Oct 2016 15:40:00 +0000 Ashley Merrick wrote:
>> > 
>> >> Hello All,
>> >> 
>> >> Currently running a CEPH cluster connected to KVM via the KRBD and
>> >> used only for this purpose.
>> >> 
>> >> Is working perfectly fine, however would like to look at increasing /
>> >> helping with random write performance and latency, especially from
>> >> multiple VM's hitting the spinning disks at the same time.
>> >> 
>> > Is it more a question of contention (HDDs being busy) or latency (lots
>> > of small write I/Os)?
>> > 
>> >> Currently have journals on SSD so helps with a very short burst,
>> >> however looking into putting some proper SSD "cache" in front.
>> >> 
>> > You will want to read some of the recent and current "cache tier"
>> > threads here, especially the "cache tiering deprecated in RHCS 2.0"
>> > one, to which interestingly there hasn't been a single comment by RH
>> > or the devs, official or otherwise.
>> > 
>> 
>> Would the configuration of a separate SSD-only pool with corresponding
>> CRUSH rules be an option to overcome the "cache tier is or will be
>> deprecated" problem?
>> 
> Not really, no.
> 
> A cache-tier can be a very quick band-aid for ALL clients (VMs) w/o any
> per client configuration and w/o knowledge of client internals.
> 

Hmm, at least with additional clients one has to recalculate whether the
cache size is still sufficient.
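
For context, the sizing question mostly comes down to a handful of pool
parameters on the cache pool. A minimal writeback tier would look roughly
like the following; the pool names "rbd" and "rbd-cache" and the limits
are only placeholders:

  # attach the cache pool to the backing pool and switch it to writeback
  ceph osd tier add rbd rbd-cache
  ceph osd tier cache-mode rbd-cache writeback
  ceph osd tier set-overlay rbd rbd-cache
  # hit-set tracking plus the size/dirty limits that would have to be
  # re-checked whenever more clients are added
  ceph osd pool set rbd-cache hit_set_type bloom
  ceph osd pool set rbd-cache target_max_bytes 1099511627776
  ceph osd pool set rbd-cache cache_target_dirty_ratio 0.4
  ceph osd pool set rbd-cache cache_target_full_ratio 0.8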

> It can also be MUCH more efficient than a dedicated SSD pool in use cases
> like mine, where the hot objects
> a) fit nicely in a small space and
> b) are not in several places, so the only course of action would be to
> put ALL of the OS disk on a SSD pool.
> 

I understand that a cache tier best fits your application. But it seems to
come at the price that you can never update that cluster anymore. At least
that's what I think I read between the lines of some of your posts in other
threads. Is that correct?

Our installation serves a lot of VMs for which the I/O is sufficient.
Nevertheless, lower latency is always an improvement.
But we want to keep up with the evolution of Ceph and would also like to
virtualize a few database machines that depend more on latency.

We'll see whether an SSD pool will be fast enough.
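
The rough idea (only a sketch; the bucket, host and pool names are made up)
would be a separate CRUSH root and rule for the SSD OSDs:

  # own CRUSH hierarchy for the SSD OSDs, repeated per SSD host and OSD
  ceph osd crush add-bucket ssd-root root
  ceph osd crush add-bucket ceph01-ssd host
  ceph osd crush move ceph01-ssd root=ssd-root
  ceph osd crush set osd.20 1.0 host=ceph01-ssd
  # rule that only selects from the SSD root, and a pool using that rule
  ceph osd crush rule create-simple ssd-rule ssd-root host
  ceph osd pool create ssd 512 512 replicated ssd-rule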

In general, I think it will be difficult to calculate/estimate the right
cache size for a large pool of VMs with differing workloads and I/O
patterns.

Therefore we won't configure a cache tier.
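
If the SSD pool works out, giving one of the database VMs a faster data
disk over KRBD would then be something like this (the image name and the
"ssd" pool from the sketch above are hypothetical):

  rbd create ssd/vm-db01-data --size 102400   # 100 GB image in the SSD pool
  rbd map ssd/vm-db01-data                    # -> /dev/rbdX, handed to the VM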

> Christian
> 

Regards

Steffen

>> So that one could use separate SSD-based rbd's in qemu VMs like one
>> would install a mix of disks in a real PC tower (fast SSD for OS startup
>> and databases and spinning disks for simple file storage).
>> 
>> [...]
>> 
>> Regards
>> 
>> Steffen
>> 
>> > 
>> > Christian
>> > 
>> >> 3/ Anything I should be aware of when looking into caching?
>> >> 
>> >> Thanks for your time!,
>> >> Ashley
>> > 
>> > 
>> > -- 
>> > Christian Balzer        Network/Systems Engineer                
>> > chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
>> > http://www.gol.com/ 
>> 
>> 
> 
> 
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

-- 
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Managing Director (Geschaeftsfuehrerin): Gudrun Kappich
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



