RE: Ceph Write Path Improvement

Somnath -- thanks for publishing all the data; it will be great to look at it offline. One thing I didn't find: how many RBD volumes, and of what size, did you use for your mixed tests? Was it just one RBD with num_jobs=1 and 10? Also, how many client systems were needed to drive the workload against the 4 storage nodes?
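For context, mixed RBD tests like the one asked about here are commonly driven with fio's rbd engine. A minimal hypothetical job file (pool, image name, and mix ratio are illustrative assumptions, not Somnath's actual settings) might look like:

```ini
[global]
ioengine=rbd            ; fio's librbd engine, talks to the cluster directly
clientname=admin        ; cephx user (assumed)
pool=rbd                ; hypothetical pool name
rbdname=testimage       ; hypothetical pre-created RBD image
rw=randrw               ; mixed random read/write workload
rwmixread=70            ; e.g. 70/30 read/write split (assumed)
bs=4k
runtime=300
time_based

[mixed-job]
iodepth=32
numjobs=1               ; the question above asks about num_jobs=1 vs 10
```

Varying `numjobs` and the number of client hosts changes how much parallelism hits the OSDs, which is why those two parameters matter when comparing runs.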

I saw the same behavior quite a while back when playing with a ramdisk journal -- not a lot of improvement.

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Somnath Roy
Sent: Thursday, September 3, 2015 10:42 AM
To: Mark Nelson; Robert LeBlanc
Cc: ceph-devel
Subject: RE: Ceph Write Path Improvement

Yes, as Mark said, I will collect all the data and hopefully present it at the next performance meeting.
BTW, I initially tested with the Hammer code base + NVRAM journal, but that performance was very spiky, with a gain of ~10% at most. I thought there was no point in collecting more data with that config.
That's why I introduced a new throttling scheme that should benefit all the scenarios.
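For readers following along: in Hammer-era FileStore, the journal location is what moves between these configurations. A hypothetical ceph.conf fragment for pointing OSD journals at a separate fast device (the device path and size here are illustrative, not the actual test settings) would be:

```ini
[osd]
; Put the journal on a dedicated NVRAM/ramdisk partition instead of
; co-locating it on the data SSD (device path is hypothetical)
osd journal = /dev/nvram0p1
; Journal size in MB; sized for the expected write burst (assumed value)
osd journal size = 10240
```

With co-located journals the same `osd journal` path simply lives on the data SSD, which is why the two configurations in the slide deck stress the SSDs differently.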

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:mnelson@xxxxxxxxxx]
Sent: Thursday, September 03, 2015 9:42 AM
To: Robert LeBlanc; Somnath Roy
Cc: ceph-devel
Subject: Re: Ceph Write Path Improvement

On 09/03/2015 11:23 AM, Robert LeBlanc wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Somnath,
>
> I'm having a hard time with your slide deck. Am I understanding
> correctly that the default Hammer install was performed on SSDs with
> co-located journals, while the optimized-code tests were run on the
> same SSDs but with the journal in NVRAM? If so, I'm having a hard time
> understanding how these tests can be comparable. I really like the
> performance gains you are seeing, but I'm trying to understand how
> much the optimized code alone helps performance.

Hi Robert,

We talked about this a bit at the weekly performance meeting. I think Somnath just hasn't had a chance to do those tests yet and is planning to do them in the coming weeks. I believe he started out with Hammer on the SSDs and then tried to figure out how to tweak things to make the NVRAM configuration perform better. Now he has to go back and retest the original configuration, but with the new code.

Mark

