Re: flashcache

On Thu, Jan 17, 2013 at 7:47 PM, Atchley, Scott <atchleyes@xxxxxxxx> wrote:
> On Jan 17, 2013, at 10:07 AM, Andrey Korolyov <andrey@xxxxxxx> wrote:
>
>> On Thu, Jan 17, 2013 at 7:00 PM, Atchley, Scott <atchleyes@xxxxxxxx> wrote:
>>> On Jan 17, 2013, at 9:48 AM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:
>>>
>>>> 2013/1/17 Atchley, Scott <atchleyes@xxxxxxxx>:
>>>>> IB DDR should get you close to 2 GB/s with IPoIB. I have gotten our IB QDR PCI-E Gen. 2 up to 2.8 GB/s measured via netperf with lots of tuning. Since it uses the traditional socket stack through the kernel, CPU usage will be as high as with 10GbE (or higher with QDR).
>>>>
>>>> What kind of tuning? Do you have a paper about this?
>>>
>>> No, I followed the Mellanox tuning guide and modified their interrupt affinity scripts.
>>
>> Did you try binding the interrupts only to the core that the QPI link
>> actually belongs to, and measuring the difference against a
>> spread-over-all-cores binding?
>
> This is the modified part. I bound the mlx4-async handler to core 0 and the mlx4-ib-1-0 handler to core 1 for our machines.
>
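For reference, the binding itself is nothing exotic: such affinity scripts
essentially just write a CPU mask into /proc/irq/<N>/smp_affinity. A rough,
untested sketch of that in C follows; the IRQ numbers and masks here are
made up, the real ones have to be looked up in /proc/interrupts:

/* Pin an IRQ to a given set of cores by writing a hex CPU mask to
 * /proc/irq/<irq>/smp_affinity.  Needs root.  IRQ numbers below are
 * placeholders; check /proc/interrupts for the mlx4 lines. */
#include <stdio.h>

static int set_irq_affinity(int irq, unsigned int cpu_mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	/* smp_affinity takes a hexadecimal bitmask of allowed CPUs */
	fprintf(f, "%x\n", cpu_mask);
	fclose(f);
	return 0;
}

int main(void)
{
	set_irq_affinity(101, 0x1);  /* e.g. mlx4-async  -> core 0 */
	set_irq_affinity(102, 0x2);  /* e.g. mlx4-ib-1-0 -> core 1 */
	return 0;
}
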
>>>> But, actually, is it possible to use Ceph with IPoIB in a stable way,
>>>> or is it still experimental?
>>>
>>> IPoIB appears as a traditional Ethernet device to Linux and can be used as such.
>>
>> Not exactly; this summer the kernel gained an additional driver for a
>> fully featured L2 device (an IB Ethernet driver). Before that it was
>> quite painful to do any kind of failover using IPoIB.
>
> I assume it is now an EoIB driver. Does it replace the IPoIB driver?
>
Nope, it is an upper-layer thing: https://lwn.net/Articles/509448/

>>>> I don't know whether support for rsockets is experimental/untested
>>>> while IPoIB is a stable workaround, or what else.
>>>
>>> IPoIB is much more widely used and pretty stable, while rsockets is new with limited testing. That said, more people using it will help Sean improve it.
>>>
>>> Ideally, we would like support for zero-copy and reduced CPU usage (via OS-bypass) and with more interconnects than just InfiniBand. :-)
>>>
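For what it is worth, the rsockets API mirrors the BSD socket calls
one-to-one (rsocket/rconnect/rsend/rrecv/...), so porting is mostly
mechanical, and librdmacm also ships an LD_PRELOAD shim for unmodified
binaries. A rough, untested sketch of a minimal client; the peer address
and port are placeholders:

/* Minimal rsockets client sketch (librdmacm).  Build with -lrdmacm.
 * The peer address and port below are placeholders. */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rsocket.h>

int main(void)
{
	struct sockaddr_in addr;
	char buf[64];
	ssize_t n;
	int fd;

	fd = rsocket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("rsocket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(7471);                          /* placeholder */
	inet_pton(AF_INET, "192.168.0.10", &addr.sin_addr);   /* placeholder */

	if (rconnect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("rconnect");
		rclose(fd);
		return 1;
	}

	rsend(fd, "ping", 4, 0);
	n = rrecv(fd, buf, sizeof(buf), 0);
	if (n > 0)
		printf("received %zd bytes\n", n);

	rclose(fd);
	return 0;
}
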
>>>> And is a dual controller needed on each OSD node? Is Ceph able to
>>>> handle OSD network failures? This is really important to know. It
>>>> changes the whole network topology.
>>>
>>> I will let others answer this.
>>>
>>> Scott--

