Re: Public network faster than cluster network

On Thu, 10 May 2018 at 02:30, Christian Balzer <chibi@xxxxxxx> wrote:
> This cosmic imbalance would clearly lead to the end of the universe.
> Seriously, think it through, what do you _think_ will happen?

I was thinking of what David said:

"For a write on a replicated pool with size 3 the client writes to the
primary osd across the public network and then the primary osd sends the
other 2 copies across the cluster network to the secondary OSDs. So for
writes the public network uses N bandwidth while the cluster use 2N
bandwidth for the replica copies. Seeing as the write isn't acknowledged
until all 3 copies are written it makes no sense to have a faster public
network"

This is exactly what I had imagined.
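
Just to make the ratio explicit, here is a minimal sketch of that write path
(the function and the numbers are mine, purely to illustrate the N vs. 2N
split for a size-3 replicated pool):

# Sketch of the replicated-write bandwidth split described above.
# Assumes a replicated pool of size 3; figures are illustrative only.

def write_bandwidth(client_write_mb_s, replica_size=3):
    """Return (public_net, cluster_net) load in MB/s for a given client write rate.

    The client sends one copy to the primary OSD over the public network;
    the primary forwards (replica_size - 1) copies over the cluster network.
    """
    public = client_write_mb_s                        # N
    cluster = client_write_mb_s * (replica_size - 1)  # (size - 1) * N = 2N
    return public, cluster

public, cluster = write_bandwidth(500)  # e.g. 500 MB/s of aggregate client writes
print(f"public: {public} MB/s, cluster: {cluster} MB/s")
# public: 500 MB/s, cluster: 1000 MB/s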

> Lastly, more often than not segregated networks are not needed, add
> unnecessary complexity and the resources spent on them would be better
> used to have just one fast and redundant network instead.

My biggest concern here is that I don't have enough 10GbE ports/switches (due
to their cost), so having 4x 10GbE per server (for a fully redundant
environment) is not possible, and our current switches have only 16 10GbE
ports.

I could probably buy two 24-port 10GbE switches and use half of the ports for
the public network and the other half for the cluster network, but that would
reduce the environment to only 12 usable ports (so, at most 12 "servers",
between hypervisors and storage nodes).

Our current storage servers have 12 slots (not all used) populated with SATA
disks, so they should provide 12 * 100 MB/s = 1.2 GB/s when reading from all
disks at once; thus a 10 Gb/s network would be needed, right? Maybe two
gigabit ports bonded together could do the job.
A single gigabit link would be saturated by a single disk.

Is my assumption correct?
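
For reference, a quick back-of-the-envelope check of those numbers (the
~100 MB/s per SATA disk is just my assumption from above):

# Throughput estimate for the 12-slot storage servers vs. link capacity.
disks = 12
per_disk_mb_s = 100                            # assumed sequential rate per SATA disk

aggregate_mb_s = disks * per_disk_mb_s         # 1200 MB/s
aggregate_gbit_s = aggregate_mb_s * 8 / 1000   # ~9.6 Gbit/s

gigabit_mb_s = 1000 / 8                        # ~125 MB/s per 1 GbE link
bond_2x1g_mb_s = 2 * gigabit_mb_s              # ~250 MB/s for a dual-gigabit bond
ten_gbe_mb_s = 10_000 / 8                      # ~1250 MB/s for 10 GbE

print(f"12 disks: {aggregate_mb_s} MB/s (~{aggregate_gbit_s:.1f} Gbit/s)")
print(f"1 GbE ~{gigabit_mb_s:.0f} MB/s, 2x1 GbE bond ~{bond_2x1g_mb_s:.0f} MB/s, "
      f"10 GbE ~{ten_gbe_mb_s:.0f} MB/s")
# A single disk (~100 MB/s) nearly saturates one 1 GbE link, and the 12-disk
# aggregate is close to what a single 10 GbE link can carry.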


