[Single OSD performance on SSD] Can't go over 3.2K IOPS

Another thread about it: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/19284
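
For reference, a minimal librados sketch of the kind of small-write IOPS probe that thread is about. The pool name ("rbd"), object names, and op count are placeholders, and it runs at queue depth 1, so it measures latency-bound IOPS rather than the numbers fio or rados bench would report at higher queue depths; it is only a sketch of the measurement, not what anyone in the thread actually ran.

// iops_sketch.cc -- rough single-client 4 KiB write-IOPS probe against one pool.
// Build: g++ -std=c++11 iops_sketch.cc -lrados -o iops_sketch
#include <rados/librados.hpp>
#include <chrono>
#include <iostream>
#include <string>

int main() {
  librados::Rados cluster;
  cluster.init("admin");                   // client.admin; monitors/keyring come from ceph.conf
  cluster.conf_read_file(nullptr);         // default config search path
  if (cluster.connect() < 0) { std::cerr << "connect failed\n"; return 1; }

  librados::IoCtx io;
  if (cluster.ioctx_create("rbd", io) < 0) {   // "rbd" pool is an assumption
    std::cerr << "no such pool\n"; return 1;
  }

  librados::bufferlist bl;
  bl.append(std::string(4096, 'x'));       // 4 KiB payload

  const int ops = 5000;
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < ops; ++i) {
    // queue depth 1: latency-bound, far below what fio/rados bench show at iodepth 32+
    io.write("bench_obj_" + std::to_string(i % 64), bl, bl.length(), 0);
  }
  auto t1 = std::chrono::steady_clock::now();
  double secs = std::chrono::duration<double>(t1 - t0).count();
  std::cout << ops / secs << " write IOPS (qd=1)\n";
  return 0;
}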

On Fri, Aug 29, 2014 at 11:01 AM, Haomai Wang <haomaiwang at gmail.com> wrote:
> Hi Roy,
>
> I have already gone through your merged code for the "fdcache" and the
> "optimizing lfn_find/lfn_open" changes. Could you share some performance
> improvement numbers for them? I fully agree with your direction; do you
> have any updates on this?
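
(For context: the fdcache work caches open file descriptors per object so that FileStore does not repeat the lfn_open name lookup and open(2) on every operation. A minimal, generic LRU sketch of that idea follows; it is not the actual FileStore FDCache code, and the class name, key type, and capacity handling are illustrative only.)

// Generic LRU cache of open file descriptors, keyed by object name.
// Illustrative only; the real FDCache also handles sharding, reference
// counting, and invalidation on rename/unlink.
#include <fcntl.h>
#include <unistd.h>
#include <list>
#include <string>
#include <unordered_map>

class FdLruCache {
  size_t capacity_;
  std::list<std::pair<std::string, int>> lru_;             // front = most recently used
  std::unordered_map<std::string,
      std::list<std::pair<std::string, int>>::iterator> index_;
public:
  explicit FdLruCache(size_t capacity) : capacity_(capacity) {}

  // Return a cached fd, or open the file and cache it (evicting the LRU entry).
  int get(const std::string& oid, const std::string& path) {
    auto it = index_.find(oid);
    if (it != index_.end()) {                // hit: move to front, skip open(2)
      lru_.splice(lru_.begin(), lru_, it->second);
      return lru_.front().second;
    }
    int fd = ::open(path.c_str(), O_RDWR);
    if (fd < 0) return fd;
    lru_.emplace_front(oid, fd);
    index_[oid] = lru_.begin();
    if (lru_.size() > capacity_) {           // evict least recently used
      index_.erase(lru_.back().first);
      ::close(lru_.back().second);
      lru_.pop_back();
    }
    return fd;
  }

  ~FdLruCache() { for (auto& e : lru_) ::close(e.second); }
};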
>
> As for the messenger level, I have some very early work on it
> (https://github.com/yuyuyu101/ceph/tree/msg-event); it contains a new
> messenger implementation that supports different event mechanisms. It
> will likely take at least one more week to get it working.
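
(For context: the "different event mechanisms" part of that branch is an event-driver abstraction, so that epoll, kqueue, or select can sit behind one interface while the messenger core stays the same. A rough sketch of that shape, assuming Linux epoll; the class and method names are illustrative, not the branch's actual code.)

// Pluggable event-driver interface: the messenger's event loop talks to this,
// and an epoll/kqueue/select backend is selected at build or run time.
#include <sys/epoll.h>
#include <unistd.h>
#include <vector>

struct FiredEvent { int fd; bool readable; bool writable; };

class EventDriver {
public:
  virtual ~EventDriver() {}
  virtual int add_fd(int fd, bool want_read, bool want_write) = 0;
  virtual int del_fd(int fd) = 0;
  virtual int wait(std::vector<FiredEvent>& out, int timeout_ms) = 0;
};

class EpollDriver : public EventDriver {
  int epfd_;
public:
  EpollDriver() : epfd_(epoll_create1(0)) {}
  ~EpollDriver() override { ::close(epfd_); }

  int add_fd(int fd, bool want_read, bool want_write) override {
    epoll_event ev{};
    ev.data.fd = fd;
    ev.events = (want_read ? EPOLLIN : 0) | (want_write ? EPOLLOUT : 0);
    return epoll_ctl(epfd_, EPOLL_CTL_ADD, fd, &ev);
  }
  int del_fd(int fd) override {
    return epoll_ctl(epfd_, EPOLL_CTL_DEL, fd, nullptr);
  }
  int wait(std::vector<FiredEvent>& out, int timeout_ms) override {
    epoll_event evs[64];
    int n = epoll_wait(epfd_, evs, 64, timeout_ms);
    for (int i = 0; i < n; ++i)
      out.push_back({evs[i].data.fd,
                     (evs[i].events & EPOLLIN) != 0,
                     (evs[i].events & EPOLLOUT) != 0});
    return n;
  }
};
// A kqueue or select backend would implement the same three methods,
// letting the messenger core stay platform-independent.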
>
> On Fri, Aug 29, 2014 at 5:48 AM, Somnath Roy <Somnath.Roy at sandisk.com> wrote:
>> Yes, what I saw is that the messenger-level bottleneck is still huge!
>> Hopefully the RDMA messenger will resolve that, and the performance gain should be significant for reads (on SSDs). For writes, we need to uncover the OSD bottlenecks first in order to take advantage of the improved upstream path.
>> What I experienced is that until you remove the very last bottleneck, the performance improvement will not be visible, and that can be confusing: you might think the upstream improvement you made is not valid, when in fact it is.
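
(A back-of-the-envelope way to read that, with made-up numbers: along a pipelined request path the sustainable rate is roughly $\mathrm{IOPS} \approx \min(C_{\mathrm{msgr}}, C_{\mathrm{osd}}, C_{\mathrm{device}})$. If the OSD worker stage tops out around 3.2K ops/s while the messenger tops out around 5K, pushing the messenger to 20K ops/s does not move the measured number at all; the messenger improvement only becomes visible once the OSD-side cap is raised past it, which is the "last bottleneck" effect described above. Illustrative numbers only.)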
>>
>> Thanks & Regards
>> Somnath
>> -----Original Message-----
>> From: Andrey Korolyov [mailto:andrey at xdel.ru]
>> Sent: Thursday, August 28, 2014 12:57 PM
>> To: Somnath Roy
>> Cc: David Moreau Simard; Mark Nelson; ceph-users at lists.ceph.com
>> Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K IOPS
>>
>> On Thu, Aug 28, 2014 at 10:48 PM, Somnath Roy <Somnath.Roy at sandisk.com> wrote:
>>> Nope, I guess this will not be backported to Firefly.
>>>
>>> Thanks & Regards
>>> Somnath
>>>
>>
>> Thanks for sharing this; the first thing that came to mind when I looked at this thread was your patches :)
>>
>> If Giant incorporates them, the RDMA support together with those patches should give a huge performance boost for RDMA-enabled Ceph back-end networks.
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat

