Re: SSD Caching

Hello,

On Wed, 26 Oct 2016 15:40:00 +0000 Ashley Merrick wrote:

> Hello All,
> 
> Currently running a Ceph cluster connected to KVM via KRBD, used only for this purpose.
> 
> It is working perfectly fine, however I would like to look at improving random write performance and latency, especially when multiple VMs hit the spinning disks at the same time.
> 
Is it more a question of contention (HDDs being busy) or latency (lots of
small write I/Os)?
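If you're not sure, a quick fio run from inside a VM will show you where
the pain is; a generic sketch, the target path is made up, point it at a
scratch file or test RBD image you can safely overwrite:

  # 4k random writes, QD32; watch both the IOPS and the latency numbers
  fio --name=randwrite --rw=randwrite --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based \
      --size=4g --filename=/mnt/scratch/fio.test

Correlate that with "iostat -x" on the OSD nodes: if %util on the HDDs is
pegged it's contention, otherwise you're looking at latency.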

> Currently have journals on SSD, which helps with very short bursts, however I am looking into putting some proper SSD "cache" in front.
> 
You will want to read some of the recent and current "cache tier" threads
here, especially the "cache tiering deprecated in RHCS 2.0" one, to which,
interestingly, there hasn't been a single comment by RH or the devs,
official or otherwise.

> I have read that in the past cache tiering hasn't been great, however it has improved somewhat in recent releases and, if set up correctly, works well once tuned.
> 

Tuning the various (and often undocumented, like "readforward") cache
options is one thing; having a working set of hot objects that actually
fits into your cache after all that tuning is the bigger question/issue.

So if you have VMs that operate on the same DB files over and over
again, you're more likely to see success than if your VMs write hundreds
of GB of fresh data per day.
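For reference, these are all per-pool settings, something along these
lines (pool name "cache" and every value here are purely illustrative,
they depend entirely on your SSD capacity and workload):

  # hit set tracking, this is what decides which objects count as "hot"
  ceph osd pool set cache hit_set_type bloom
  ceph osd pool set cache hit_set_count 12
  ceph osd pool set cache hit_set_period 14400
  # when to start flushing dirty objects and evicting, 1TB cache here
  ceph osd pool set cache target_max_bytes 1099511627776
  ceph osd pool set cache cache_target_dirty_ratio 0.4
  ceph osd pool set cache cache_target_full_ratio 0.8
  # how many recent hit sets an object must show up in to get promoted
  ceph osd pool set cache min_read_recency_for_promote 1
  ceph osd pool set cache min_write_recency_for_promote 1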

> However, as I want to make sure the choice and hardware I put in place will last for a while / future releases, does SSD cache work / will it work with the new BlueStore?
> 
See the above thread for the "future" of cache-tiering. 

BlueStore may or may not be fast enough to make cache-tiering unnecessary
for your situation.
I see no reason why cache-tiering wouldn't work with it; it's just pools
after all, nothing to do with the storage layer.
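The plumbing is all pool-level commands, for example with a base pool
"rbd" and a cache pool "cache" (names assumed):

  ceph osd tier add rbd cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay rbd cache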

> Or am I better off creating an SSD pool, placing the OS disk on it, and using the standard pool for anything non-OS related such as /home partitions, etc. (bigger overhead on the configuration side per VM)?
> 
Usually the scenario here would be to have "fast" (SSD pool backed) images
for users with special needs (e.g. DBs).

An SSD pool approach has the advantage that you can be VERY specific,
instead of having to deal with ALL transactions and data the way a
cache-tier would.
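That means giving the SSD OSDs their own CRUSH root and rule, roughly
like this (bucket/rule/pool names are made up, and you'd still have to
move the SSD hosts/OSDs under the new root yourself):

  # separate CRUSH root for the SSD OSDs
  ceph osd crush add-bucket ssd root
  # simple rule picking from that root, replicating across hosts
  ceph osd crush rule create-simple ssd_rule ssd host
  # replicated pool on the SSDs, point the "fast" RBD images at this
  ceph osd pool create ssd-pool 128 128 replicated ssd_rule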


> Basically my questions are:
> 
> 1/ Will cache tier continue to be supported in versions to come and with the new backend format?
>
Nobody knows at this time, or more precisely is willing to speak up.
 
> 2/ I currently run at replication 3; is it safe to run replication 2 for the SSD cache in writeback mode when using DC-grade SSDs?
> 
I'm doing it, for both performance and cost reasons, but you'll sleep
better with 3x replication, especially if the individual SSDs are large
and/or your network is slow (time to recovery).
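If you do go with 2x on the cache pool, it's just the usual per-pool
knobs (pool name assumed again):

  ceph osd pool set cache size 2
  # min_size 1 keeps I/O flowing with a single copy left, at your own risk
  ceph osd pool set cache min_size 1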

Christian

> 3/ Anything I should be aware of when looking into caching?
> 
> Thanks for your time!
> Ashley


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


