Re: rbd cache on full ssd cluster

Hello Christian,
I'm here because I've done this already... I tried everything that was suggested and still can't see any improvement.

On Mar 11, 2016 2:01 AM, "Christian Balzer" <chibi@xxxxxxx> wrote:

Hello,

As always, there are many similar threads in here; googling and reading up on
them is good for you.

On Thu, 10 Mar 2016 16:55:03 +0200 Yair Magnezi wrote:

> Hello Cephers,
>
> I wonder if anyone has experience with a full SSD cluster.
> We're testing Ceph ("firefly") on 4 nodes (Supermicro SYS-F628R3-R72BPT)
> with 1TB SSDs, 12 OSDs in total.
> Our network is 10GbE.
Much more relevant detail is needed, from SW versions (kernel, OS, Ceph) and
configuration (replica size of your pool) to precise HW info.

In particular your SSDs, exact maker/version/size.
Where are your journals?
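
For example, the basics could be gathered along these lines (pool name and
device are placeholders, adjust to your setup):

  ceph -v                              # Ceph version
  uname -r                             # kernel version
  ceph osd pool get <poolname> size    # replica count of the pool
  smartctl -i /dev/sdX                 # exact SSD model, firmware and size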

Also, Firefly is EOL; Hammer, and even more so the upcoming Jewel, have
significant improvements with SSDs.

> We used ceph-deploy for the installation with all defaults (following the
> Ceph documentation for integration with OpenStack).
> As far as we understand, there is no need to enable the RBD cache since
> we're running on all SSDs.
RBD cache, as in the client-side librbd cache, is always very helpful, fast
backing storage or not.
It can significantly reduce the number of small writes, which Ceph otherwise
has to do a lot of heavy lifting for.
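
For reference, enabling it on the client side in ceph.conf looks roughly like
this (plain example values, not tuned recommendations):

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true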

> Benchmarking the cluster shows very poor performance, on writes but mostly
> on reads (clients are OpenStack but also VMware instances).

Benchmarking how (exact command line for fio for example) and with what
results?
You say poor, but that might be "normal" for your situation, we can't
really tell w/o hard data.
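
For example, a small random write test against a file on an RBD-backed volume
along these lines would give us comparable numbers (path and sizes are
placeholders):

  fio --name=randwrite --filename=/path/to/testfile --ioengine=libaio \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 --size=4G --runtime=60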

"Poor" write performance would indicative of SSDs that are unsuitable for
Ceph.
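
A quick way to check the SSDs themselves is a single-threaded sync write test,
something along these lines (destructive when pointed at a raw device, so use
a spare device or a test file on it):

  fio --name=sync-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60

SSDs that handle Ceph journal writes well sustain thousands to tens of
thousands of IOPS here; many consumer drives collapse to a few hundred.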

> Any input is much appreciated (we especially want to know which parameters
> are crucial for read performance in a full SSD cluster).
>

read_ahead in your clients can improve things, but I guess your cluster
has more fundamental problems than this.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/028552.html
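
For instance, inside the guest (device name is just an example):

  echo 4096 > /sys/block/vda/queue/read_ahead_kb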


Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
