Re: Adding cluster network to running cluster

On 06/07/2018 10:56 AM, Kevin Olbrich wrote:
> Really?
> 
> I always thought that splitting off the replication network was best
> practice, but keeping everything in the same IPv6 network is much easier.
> 

No, there is no big benefit unless your use case specifically asks for
it (which 99% of use cases don't).

Keep it simple: one network to run the cluster on. Fewer components that
can fail or complicate things.
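
For example, a single-network setup in ceph.conf can be as small as the
sketch below; the subnet is just a documentation placeholder:

    [global]
    # One network for everything. When cluster_network is not set,
    # OSD replication traffic simply uses the public network as well.
    public_network = 2001:db8::/64
    ms_bind_ipv6 = true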

Wido

> Thank you.
> 
> Kevin
> 
> 2018-06-07 10:44 GMT+02:00 Wido den Hollander <wido@xxxxxxxx
> <mailto:wido@xxxxxxxx>>:
> 
> 
> 
>     On 06/07/2018 09:46 AM, Kevin Olbrich wrote:
>     > Hi!
>     > 
>     > When we installed our new Luminous cluster, we had issues with the
>     > cluster network (setup of the MONs failed).
>     > We moved on with a single network setup.
>     > 
>     > Now I would like to set up the cluster network again, but the cluster
>     > is in use (4 nodes, 2 pools, VMs).
> 
>     Why? What is the benefit of having a cluster network? Back in the
>     old days, when 10Gb was expensive, you would run the public network
>     on 1Gb and the cluster network on 10Gb.
> 
>     Now with 2x10Gb going into each machine, why still bother with managing
>     two networks?
> 
>     I really do not see the benefit.
> 
>     I manage multiple 1000~2500 OSD clusters, all running with their
>     nodes on IPv6 and 2x10Gb in a single network. That works just fine.
> 
>     Try to keep the network simple and do not overcomplicate it.
> 
>     Wido
> 
>     > What happens if I set the cluster network on one of the nodes and reboot
>     > (maintenance, updates, etc.)?
>     > Will the node use both networks, since the other three nodes are
>     > not reachable there?
>     > 
>     > Both the MONs and OSDs have IPs in both networks, so routing is not
>     > needed. This cluster is dual-stack, but we set ms_bind_ipv6 = true.
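
For reference, adding a cluster network later is "only" a ceph.conf change
on every node, but it has to reach all of the nodes before their OSDs are
restarted. A minimal sketch, with placeholder subnets:

    [global]
    public_network  = 2001:db8:1::/64
    # cluster_network carries only OSD replication and heartbeat
    # traffic; MONs keep using the public network. If only some OSDs
    # pick this up, they announce cluster addresses their peers do
    # not use yet, and heartbeats can fail.
    cluster_network = 2001:db8:2::/64
    ms_bind_ipv6    = true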
>     > 
>     > Thank you.
>     > 
>     > Kind regards
>     > Kevin
>     > 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



