Re: 6 Node cluster with 24 SSD per node: Hardwareplanning/ agreement

Hi,


On 05.10.2016 10:48, Christian Balzer wrote:
> The switch has nothing to do with IPoIB; as the name implies, it's entirely
> native Infiniband with IP encoded onto it.
> Thus it benefits from fast CPUs.

ahh, I assumed it did ... :-) but from some Mellanox documents I thought it had to be supported by the switch. I have a call with Mellanox support later.

> I take it that you have no real experience with Infiniband or at least
> IPoIB?

100% right. Completely new territory for us ... but I have read in many Ceph and Proxmox forums that IB should be the best choice for the storage network.
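
From what I have read so far, an IPoIB port on Proxmox/Debian is configured like any other IP interface, e.g. in /etc/network/interfaces roughly like this (the interface name ib0, the address and the connected-mode MTU are just my guesses, nothing we have set up yet):

    auto ib0
    iface ib0 inet static
        address 192.168.100.11/24
        mtu 65520
        pre-up echo connected > /sys/class/net/ib0/mode

So the switch only forwards native Infiniband; the IP encapsulation runs on the host CPU, which I guess is why the fast cores matter.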

> Have you looked at other 10Gb/s switches, like Arctica/Penguin and all the
> similar white boxes?

never heard of them ... the Arctica 3200C is interesting ... but I didn't find any pricing ... do you have a ballpark price?


> Security doesn't really factor into this; the cluster network in Ceph is
> only used for replication.
> Policy maybe, but not really an issue unless you could overwhelm your
> network, which we established you can't with the current design.

it is more of a PCI DSS (Visa/Mastercard) thing, but from what I have heard, VLANs and a firewall should be O.K.
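
If I understand it right, the separation is just two subnets in ceph.conf, something like this (the subnets/VLANs here are made up for illustration, not our real ones):

    [global]
        public network  = 10.10.10.0/24    # client/VM traffic on the firewalled VLAN
        cluster network = 10.10.20.0/24    # replication only, e.g. the 10Gb/s subnet

That way the replication traffic never touches the client-facing VLAN anyway.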

> 1Gb/s will be painful and have significantly higher latency than the
> 10Gb/s links; don't go there.

seems so; I am looking at 10Gb/s only.
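
Just to get a feeling for the numbers: putting a 4 KB write on the wire alone takes roughly 4096 * 8 bits / 1 Gb/s ≈ 33 µs, but only about 3.3 µs at 10 Gb/s, before any switch, TCP or Ceph overhead. So the latency point is clear to me.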

Thanks a lot ...

learned so much in the last few hours ...


cu denny
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


