Re: Public network faster than cluster network

Hello,

On Thu, 10 May 2018 07:24:20 +0000 Gandalf Corvotempesta wrote:

> 
> > Lastly, more often than not segregated networks are not needed; they add
> > unnecessary complexity, and the resources spent on them would be better
> > used on a single fast and redundant network instead.  
> 
> Biggest concern here is that I don't have enough 10GbE ports/switches (due
> to their cost), so having 4x 10GbE ports per server (for a fully redundant
> environment) is not possible, and our current switches have only 16 10GbE
> ports each.
> 
Without knowing your use case (lots of large reads or writes, or the more
typical smallish I/Os), it's hard to give specific advice.

In general, switches that support MC-LAG (often sold as stacked switches,
vLAG, etc.) are preferable, giving you 2x bandwidth _and_ redundancy.

> Probably I can buy 2x 24-port 10GbE switches and use half of the ports for
> the public network and the other half for the cluster network, but this
> would reduce the environment to only 12 usable ports (so, 12 "servers" at
> most, between hypervisors and storage).
> 
As David and I tried to point out, you don't really need separate networks,
especially not with bonding and MC-LAG (vLAG, etc.) switches.

That would give you 24 servers with up to 20Gb/s per server when both
switches are working, which is likely to be very close to 100% of the time.
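
If you go that route, Ceph itself needs nothing special; a minimal
ceph.conf sketch for a single flat network (the 10.0.0.0/24 subnet below is
just a placeholder, substitute your own):

    [global]
    # One fast, redundant (bonded) network carries everything.
    # With no "cluster network" defined, OSD replication and heartbeat
    # traffic simply use the public network as well.
    public network = 10.0.0.0/24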

> Our current storage servers have 12 slots (not all used) with SATA
> disks; they should provide 12 * 100MB/s = 1.2GB/s when reading from all
> disks at once,

That's a very optimistic number, assuming journal/WAL/DB on SSDs _and_ no
concurrent write activity.
Since you mentioned hypervisors above, one assumes VMs on RBD and a mixed
I/O pattern, which will saturate your disks with IOPS long before bandwidth
becomes an issue.
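
Back-of-envelope, even taking your 100MB/s per disk at face value (a quick
Python sketch, nothing Ceph-specific):

    disks = 12
    mb_per_disk = 100                    # MB/s, your (optimistic) figure
    total_mbs = disks * mb_per_disk      # 1200 MB/s = 1.2 GB/s
    total_gbit = total_mbs * 8 / 1000.0  # ~9.6 Gb/s
    print(total_gbit)                    # nearly saturates one 10GbE link
    print(2 * 1.0)                       # 2 Gb/s for a 2x1GbE bond

So even in the best case you're at the limit of a single 10GbE link, and a
dual gigabit bond falls short by a factor of ~5.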

> thus, a 10Gb/s network would be needed, right? Maybe two gigabit ports
> bonded together could do the job.
> A single gigabit link would be saturated by a single disk.
> 
> Is my assumption correct?
>
The biggest argument against 1Gb/s links is latency, as mentioned.
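
To put a number on that, serialization delay alone differs by an order of
magnitude (a sketch; 4KiB is just a typical small RBD I/O size):

    payload_bits = 4096 * 8
    print(payload_bits / 1e9 * 1e6)   # ~32.8 us on the wire at 1Gb/s
    print(payload_bits / 1e10 * 1e6)  # ~3.3 us at 10Gb/s
    # Every Ceph write crosses the network several times (client to
    # primary OSD, primary to replicas, acks), so this adds up fast.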

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


