Re: Extra RAM use as Read Cache

Hi Vickey,

What are you using for the clients to access the Ceph cluster, i.e. kernel-mounted RBD, KVM VMs, CephFS? And, as Somnath Roy touched on, what sort of IO pattern are you generating? If you can also specify the type of hardware and the configuration you are running, that will help as well.

You said you can get 2.5GB/s on Lustre, so I'm assuming these are sequential reads? Please see the recent thread on the Ceph Dev list regarding readahead being broken in recent kernels.
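As a starting point for checking the readahead setting yourself, here is a minimal sketch. The sysfs path layout is standard, but the device name `rbd0` is just an example; substitute whatever block device your clients actually use.

```python
# Sketch: locating and parsing the kernel readahead setting for a block device.
# "rbd0" is an example device name, not something from this thread.

def read_ahead_kb_path(dev: str) -> str:
    """Return the sysfs path holding the readahead size (in KB) for a device."""
    return f"/sys/block/{dev}/queue/read_ahead_kb"

def parse_read_ahead_kb(raw: str) -> int:
    """The file contains a single integer, in KB."""
    return int(raw.strip())

if __name__ == "__main__":
    path = read_ahead_kb_path("rbd0")
    print(path)  # /sys/block/rbd0/queue/read_ahead_kb
    # On a live host you could then do:
    #   with open(path) as f:
    #       print(parse_read_ahead_kb(f.read()))
    # (blockdev --getra / --setra offer the same information from the shell,
    # in 512-byte sectors rather than KB.)
```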

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Somnath Roy
> Sent: 07 September 2015 23:34
> To: Vickey Singh <vickey.singh22693@xxxxxxxxx>; ceph-
> users@xxxxxxxxxxxxxx
> Subject: Re:  Extra RAM use as Read Cache
> 
> Vickey,
> OSDs sit on top of a filesystem, and any unused memory will automatically
> be used by the filesystem as page cache.
> However, the read performance improvement depends on the pattern in which
> the application reads data and on the size of the working set.
> A sequential pattern will benefit most (you may need to raise
> read_ahead_kb to a bigger value). Random workloads will also benefit if
> the working set is not too big. For example, a LUN of say 1TB with an
> aggregate OSD page cache of say 200GB will benefit more than a LUN of
> say 100TB with a similar amount of page cache (assuming a truly random
> pattern).
> 
> Thanks & Regards
> Somnath
> 
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Vickey Singh
> Sent: Monday, September 07, 2015 2:19 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  Extra RAM use as Read Cache
> 
> Hello Experts,
> 
> I want to increase my Ceph cluster's read performance.
> 
> I have several OSD nodes with 196GB of RAM. On these nodes Ceph itself
> uses only 15-20GB of RAM.
> 
> So, can I instruct Ceph to use the remaining 150GB+ of RAM as a read
> cache, so that it caches data in RAM and serves it to clients very fast?
> 
> I hope that if this can be done, I can get a good read performance boost.
> 
> 
> By the way, we have a Lustre cluster that uses the extra RAM as read
> cache, and we can get up to 2.5GB/s read performance. I am looking to do
> the same with Ceph.
> 
> - Vickey -
> 
> 
> 
> 
> 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


