Re: 25G RDMA networking thoughts???

On Wed, Nov 9, 2016 at 5:51 AM, LIU, Fei <james.liu@xxxxxxxxxxxxxxx> wrote:
> Hi Sage,
>    Thanks, and sorry for the confusion. What I am trying to say is async
> with RDMA over Ethernet. I understand that the async messenger supports
> TCP over Ethernet well.
> Secondly, would it be possible to have data traffic among OSDs categorized
> into different service levels to better provide QoS for the whole Ceph
> cluster service, given the 25G RDMA facilities?
> Thirdly, would it be possible to provide a unified network for both storage
> and compute under QoS control? We don't want replication/recovery/backfill
> to hurt application latency.

Currently I have only tested RDMA over InfiniBand. I don't have an
RDMA-over-Ethernet (RoCE) NIC on hand, so I'm not sure what would be needed
for an Ethernet RDMA NIC...
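For what it's worth, the verbs API looks the same either way; whether a port
runs plain InfiniBand or RoCE shows up as its link layer. Below is a minimal,
untested sketch (not from this thread) using libibverbs to report that; the
use of port 1 and the build line are assumptions, not anything Ceph-specific.

    /* Sketch: list RDMA devices and report whether port 1 runs over
     * InfiniBand or Ethernet (RoCE).  Build with:
     *   gcc -o rdma_linklayer rdma_linklayer.c -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                const char *ll =
                    port.link_layer == IBV_LINK_LAYER_ETHERNET ?
                        "Ethernet (RoCE)" :
                    port.link_layer == IBV_LINK_LAYER_INFINIBAND ?
                        "InfiniBand" : "unspecified";
                printf("%s: port 1 link layer = %s\n",
                       ibv_get_device_name(devs[i]), ll);
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }

ibv_devinfo from rdma-core prints the same link_layer field if you just want
a quick check from the shell.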

>
>
>    Regards,
>    James
>
> ------------------------------------------------------------------
> From:Sage Weil <sweil@xxxxxxxxxx>
> Time:2016 Nov 8 (Tue) 13:45
> To:James <james.liu@xxxxxxxxxxxxxxx>
> Cc:Haomai Wang <haomai@xxxxxxxx>; ceph-devel <ceph-devel@xxxxxxxxxxxxxxx>
> Subject:Re: 25G RDMA networking thoughts???
>
> On Wed, 9 Nov 2016, LIU, Fei wrote:
>> Hi Sage,
>>    Yes, totally understood. The 25G RDMA network for the Ceph cluster is
>> built for internal testing. The Xio messenger and the async messenger (but
>> that can only support InfiniBand, right?) are the two options. We are
>> carefully evaluating both. But the most important goal in the end is to see
>> how BlueStore works with RDMA to bring down the total latency for workloads
>> like OLTP.
>>
>> Hi Haomai,
>>    Would you mind letting us know when the async messenger is going to
>> support Ethernet, if it doesn't yet?
>
> The default async backend is PosixStack which is all TCP-based.  (And
> async is now the default messenger in kraken.)
>
> sage
>
>>
>>    Regards,
>>    James
>>       ------------------------------------------------------------------
>> From:Sage Weil <sweil@xxxxxxxxxx>
>> Time:2016 Nov 8 (Tue) 13:19
>> To:James <james.liu@xxxxxxxxxxxxxxx>
>> Subject:Re: 25G RDMA networking thoughts???
>>
>> [adding ceph-devel]
>>
>> On Wed, 9 Nov 2016, LIU, Fei wrote:
>> > Hi Sage,
>> >    I was wondering, do you have any thoughts on 25G RDMA network
>> > construction besides xio-messenger/async?  Is there any guidance on
>> > building a 25G RDMA network to better control overall Ceph cluster
>> > latency?
>>
>> The only RDMA options right now are XioMessenger and AsyncMessenger's new
>> RDMA backend. Both are experimental, but we'd be very interested in
>> hearing about your experience.
>>
>> I wouldn't assume that latency is network-related, though.  More often
>> than not we're finding it's the OSD backend or the OSD request internals
>> (e.g., request scheduling or peering) that's the culprit...
>>
>> sage
>>
>>
>>
>>
>
>
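Sage mentions above that the default async backend is PosixStack; the backend
is selected with ms_type in ceph.conf. A rough sketch of the relevant settings
(option names as of the Kraken-era experimental RDMA backend, and the device
name is only an example, so check both against your build):

    [global]
    # Default in kraken: async messenger over TCP
    ms_type = async+posix

    # Experimental: async messenger over an RDMA (verbs) device
    #ms_type = async+rdma
    #ms_async_rdma_device_name = mlx5_0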