Re: Extra RAM to improve OSD write performance ?

Hello,

As Somnath writes below, RAM will only indirectly benefit writes.
But with the right tuning to keep dentries and other FS-related caches in
the slab, it can help a lot.
The same goes for all the really hot objects that are read frequently and
still fit in the page cache of your storage nodes: every read served from
memory is a disk access avoided, leaving those IOPS for your writes.
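A minimal sketch of that tuning, assuming a Linux storage node (the file
name and the value are illustrative starting points, not tested
recommendations):

```shell
# Illustrative sysctl fragment: bias the kernel toward keeping dentry and
# inode caches in the slab. The value is an assumption to tune per node.
cat > /tmp/99-osd-cache.conf <<'EOF'
# Lower values make the kernel more reluctant to reclaim dentry/inode
# caches under memory pressure (the default is 100).
vm.vfs_cache_pressure = 10
EOF
# To apply for real, copy the file to /etc/sysctl.d/ and run: sysctl --system
cat /tmp/99-osd-cache.conf
```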

However, you have to realize that these are in a sense fake IOPS: once your
cluster gets busy, the workload changes, or the nodes run out of memory to
hold all those entries and objects, you're back to whatever performance the
backing storage of your OSDs can provide.
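One rough way to see how much you are leaning on this is to check the page
cache and reclaimable slab on a node; a plain /proc reading, nothing
Ceph-specific:

```shell
# Show how much RAM is currently spent on page cache and on reclaimable
# slab (which includes dentry/inode caches) -- the memory doing the
# "fake IOPS" work described above. Values in /proc/meminfo are in kB.
awk '/^Cached:/ || /^SReclaimable:/ {printf "%-14s %8d MiB\n", $1, $2/1024}' /proc/meminfo
```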

If your cluster is write-heavy and light on reads, that's a perfect
example of both the benefits and the caveats.
Basically, once you find that deep-scrubs severely impact your cluster
performance (they have to read EACH object on disk, not just the hot
ones, making your disks seek/thrash), it is time to increase I/O
capacity, usually by adding more OSDs.
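Until then, you can at least keep deep-scrubs away from busy hours. A
ceph.conf sketch (the option names exist in recent Ceph releases, the
interval is in seconds, and the values are assumptions to adapt to your
cluster):

```shell
# Illustrative [osd] ceph.conf fragment to soften deep-scrub impact.
# Values are guesses, not recommendations.
cat > /tmp/scrub-tuning.conf <<'EOF'
[osd]
; deep-scrub each PG at most every 2 weeks instead of the weekly default
osd deep scrub interval = 1209600
; only start (deep-)scrubs between 01:00 and 06:00 local time
osd scrub begin hour = 1
osd scrub end hour = 6
EOF
cat /tmp/scrub-tuning.conf
```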

Regards,

Christian

On Sun, 14 Feb 2016 17:24:37 +0000 Somnath Roy wrote:

> I doubt it will do much good in the case of a 100% write workload. You
> can tweak your VM dirty ratio settings to help buffered writes, but the
> downside is that the more data it has to sync (when eventually flushing
> the dirty buffers), the more spikiness it will induce. The write
> behavior won't be smooth and the gain won't be much (or not at all).
> But Ceph does xattr reads in the write path, so if you have a very
> large workload this extra RAM will help you hold the dentry caches in
> memory (or lower the swappiness setting so dentry caches aren't swapped
> out) and effectively save some disk hits. Also, in a mixed read/write
> scenario this should help, as some reads could benefit from it. It all
> depends on how random and how big your workload is.
> 
> 
> Thanks & Regards
> Somnath
> 
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Vickey Singh Sent: Sunday, February 14, 2016 1:55 AM
> To: ceph-users@xxxxxxxxxxxxxx; ceph-users
> Subject:  Extra RAM to improve OSD write performance ?
> 
> Hello Community
> 
> Happy Valentines Day ;-)
> 
> I need some advice on using EXTRA RAM on my OSD servers to improve
> Ceph's write performance.
> 
> I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so
> assuming the cluster is not recovering, most of the time each system
> will have at least ~150GB RAM free. Across 20 machines that's a lot:
> ~3.0 TB of RAM.
> 
> Is there any way to use this free RAM to improve the write performance
> of the cluster? Something like the Linux page cache, but for OSD write
> operations.
> 
> I assume that by default the Linux page cache can use free memory to
> improve OSD read performance (please correct me if I am wrong). But how
> about OSD write performance? How can that be improved with free RAM?
> 
> PS: My Ceph cluster's workload is just OpenStack Cinder, Glance, and
> Nova instance disks.
> 
> - Vickey -
> 
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



