Re: flashcache

On Jan 17, 2013, at 9:48 AM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:

> 2013/1/17 Atchley, Scott <atchleyes@xxxxxxxx>:
>> IB DDR should get you close to 2 GB/s with IPoIB. I have gotten our IB QDR PCI-E Gen. 2 up to 2.8 GB/s measured via netperf with lots of tuning. Since it uses the traditional socket stack through the kernel, CPU usage will be as high as (or, with QDR, higher than) 10GbE.
> 
> What kind of tuning? Do you have a paper about this?

No, I followed the Mellanox tuning guide and modified their interrupt affinity scripts.
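
Concretely, most of the win came from the usual large-socket-buffer
sysctls plus pinning the mlx4 interrupts to cores local to the HCA.
A rough sketch (values and the IRQ number are examples, not a
recommendation; tune for your own setup):

    # Raise socket buffer ceilings so TCP can keep the DDR/QDR pipe full
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # Pin each HCA IRQ to a core on the adapter's NUMA node; the
    # Mellanox scripts automate this, and IRQ numbers are machine-specific
    echo 4 > /proc/irq/<irq>/smp_affinity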

> But, actually, is it possible to use Ceph with IPoIB in a stable way,
> or is this experimental?

IPoIB appears as a traditional Ethernet device to Linux and can be used as such. Ceph has no idea that it is not Ethernet.
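
To be concrete: you just give Ceph the IPoIB subnet(s) in ceph.conf
like any other IP network. A minimal sketch (the subnets below are
placeholders for whatever your ib0/ib1 interfaces carry):

    [global]
        # client-facing traffic over the IPoIB subnet
        public network = 192.168.10.0/24
        # optional second IPoIB subnet for replication/backfill
        cluster network = 192.168.20.0/24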

> I don't know whether rsockets support is experimental/untested and
> IPoIB is the stable workaround, or what else.

IPoIB is much more widely used and pretty stable, while rsockets is new and has seen limited testing. That said, more people using it will help Sean improve it.
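
For anyone who wants to try it, rsockets ships with an LD_PRELOAD shim
in librdmacm, so you can test an unmodified binary without rebuilding
anything. Something like this (the library path varies by distro):

    # Intercepts the socket calls and maps them onto rsockets/RDMA
    LD_PRELOAD=/usr/lib64/rsocket/librspreload.so netperf -H <server>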

Ideally, we would like support for zero-copy and reduced CPU usage (via OS-bypass), and for more interconnects than just InfiniBand. :-)

> And is a dual controller needed on each OSD node? Is Ceph able to
> handle OSD network failures? This is really important to know. It
> changes the whole network topology.

I will let others answer this.

Scott

