Re: Ceph scale testing

Hi Somnath,

I remember Xinxin@Intel could get 150K+ RW IOPS with 2x replication,
using DCS3700s as OSD data devices and P3700s as journals, on a 4-node
Haswell cluster (based on Infernalis + tuning + jemalloc + RocksDB). And
it seems your patch doesn't work well in this environment.
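
(For context, a layout like that is usually expressed in ceph.conf by
pointing each OSD's journal at a partition on the NVMe device. A minimal
sketch; the paths, journal size, and OSD id below are illustrative, not
the actual values from Xinxin's cluster:

    [osd]
    osd journal size = 10240              # MB, illustrative

    [osd.0]
    osd data = /var/lib/ceph/osd/ceph-0   # on the DCS3700 (SATA SSD)
    osd journal = /dev/nvme0n1p1          # partition on the P3700 (NVMe)
)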

Maybe I am wrong, but I have a feeling that the patch doesn't work well
with every vendor's SSDs, or maybe it works better with capacity-oriented
SSDs?

Xiaoxi

2016-01-22 1:26 GMT+08:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:
> Haomai,
> This is not improved performance over stock Jewel but over stock Hammer.
> It's not 100% because of my changes, but *mostly*, I guess, at least in my environment with SAS SSDs.. There are also some improvements that went in post-Hammer and improved performance (20% or so)..
>
> Good to see IF100 being deployed widely :-)
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiwang@xxxxxxxxx]
> Sent: Thursday, January 21, 2016 1:21 AM
> To: Somnath Roy
> Cc: Jens Rosenboom; ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: Ceph scale testing
>
> On Thu, Jan 21, 2016 at 4:20 PM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
>> All the numbers other than the last write numbers are from Hammer... The last one is with Jewel + an outstanding write-path patch..
>> We used jemalloc-based Hammer + we have some tuning specific to our flash and environment..
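>>
>> (For anyone reproducing the allocator setup: Hammer-era autotools
>> builds can be configured with jemalloc at build time, or the allocator
>> can be swapped at runtime with an LD_PRELOAD; the library path below
>> is illustrative and distro-dependent:
>>
>>     # build time (autotools)
>>     ./configure --with-jemalloc
>>
>>     # or preload at runtime for an OSD
>>     LD_PRELOAD=/usr/lib/libjemalloc.so.1 ceph-osd -i 0
>> )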
>
> So you mean that the improved performance in Jewel comes from the filestore write-path optimizations (as listed in the PR)?
>
> BTW, I just saw your IF100 in a customer env, congratulations!
>
> http://www.tiikoni.com/tis/view/?id=dcb1d53
>
>>
>> -----Original Message-----
>> From: Jens Rosenboom [mailto:j.rosenboom@xxxxxxxx]
>> Sent: Thursday, January 21, 2016 12:13 AM
>> To: Somnath Roy
>> Cc: ceph-devel@xxxxxxxxxxxxxxx
>> Subject: Re: Ceph scale testing
>>
>> 2016-01-21 5:03 GMT+01:00 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>:
>>> Hi,
>>> Here is the copy of the ppt I presented in today's performance meeting..
>>>
>>> https://docs.google.com/presentation/d/1j4Lcb9fx0OY7eQlQ_iUI6TPVJ6t_orZWKJyhz0S_3ic/edit?usp=sharing
>>
>> Wow, these numbers look pretty impressive. Did you use some additional patches in your Jewel build? In my own testing I'm seeing only marginal improvements in plain Jewel vs. Hammer.
>>
>> To have a fair comparison, you should also use the same queue depth (QD) for both Hammer and Jewel.
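>>
>> (One concrete way to pin the queue depth for both runs, assuming the
>> numbers come from fio's rbd engine; the pool, image, and client names
>> below are illustrative:
>>
>>     fio --name=4k-randwrite --ioengine=rbd --clientname=admin \
>>         --pool=rbd --rbdname=test --rw=randwrite --bs=4k \
>>         --iodepth=32 --runtime=300 --time_based
>> )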
>>
>> Finally, the avg latency for 4k ops isn't that meaningful on its own; could you also add numbers for max latency or a 99.xx percentile?
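>>
>> (fio can print those tail latencies directly, e.g. by adding
>> --percentile_list=50:99:99.9:99.99 to the run above; the exact list
>> is just an example.)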
>
> --
> Best Regards,
>
> Wheat