On Wed, Dec 04, 2019 at 03:42:00PM +0000, Serguei Bezverkhi (sbezverk) wrote:
> Hi Phil,
>
> I can also minimize any impact by inserting a new rule in front of the
> old one and then deleting the old one. So in this case there should not
> be any impact. Here are the iptables rules I try to mimic:

Yes, that's more or less equivalent to doing it in a single transaction.

> // -A KUBE-SVC-57XVOCFNTLTR3Q27 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-FS3FUULGZPVD4VYB
> // -A KUBE-SVC-57XVOCFNTLTR3Q27 -j KUBE-SEP-MMFZROQSLQ3DKOQA
> // !
> // ! Endpoint 1 for KUBE-SVC-57XVOCFNTLTR3Q27
> // !
> // -A KUBE-SEP-FS3FUULGZPVD4VYB -s 57.112.0.247/32 -j KUBE-MARK-MASQ
> // -A KUBE-SEP-FS3FUULGZPVD4VYB -p tcp -m tcp -j DNAT --to-destination 57.112.0.247:8080
> // !
> // ! Endpoint 2 for KUBE-SVC-57XVOCFNTLTR3Q27
> // !
> // -A KUBE-SEP-MMFZROQSLQ3DKOQA -s 57.112.0.248/32 -j KUBE-MARK-MASQ
> // -A KUBE-SEP-MMFZROQSLQ3DKOQA -p tcp -m tcp -j DNAT --to-destination 57.112.0.248:8080
>
> As you can see, the SVC chain KUBE-SVC-57XVOCFNTLTR3Q27 load-balances
> between 2 endpoints.

OK, static load-balancing between two endpoints - no big deal. :)

What happens if the config changes? I.e., if one of the endpoints goes
down or a third one is added? (That's the thing we're discussing right
now, aren't we?)

Cheers, Phil
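
For reference, a rough nft sketch of the quoted setup could look like the
following. This is only a sketch: the table name, the base chain, the
10.96.0.10:80 cluster IP/port and the 0x4000 masquerade mark are assumptions
for illustration, not taken from the quoted rules.

table ip kube-proxy {
	chain prerouting {
		type nat hook prerouting priority -100;
		# hypothetical cluster IP and port for this service
		ip daddr 10.96.0.10 tcp dport 80 jump KUBE-SVC-57XVOCFNTLTR3Q27
	}

	chain KUBE-MARK-MASQ {
		# assumed masquerade mark value
		meta mark set mark or 0x4000
	}

	chain KUBE-SVC-57XVOCFNTLTR3Q27 {
		# pick one of the two endpoint chains at random, 50/50
		numgen random mod 2 vmap { 0 : jump KUBE-SEP-FS3FUULGZPVD4VYB, 1 : jump KUBE-SEP-MMFZROQSLQ3DKOQA }
	}

	chain KUBE-SEP-FS3FUULGZPVD4VYB {
		ip saddr 57.112.0.247 jump KUBE-MARK-MASQ
		ip protocol tcp dnat to 57.112.0.247:8080
	}

	chain KUBE-SEP-MMFZROQSLQ3DKOQA {
		ip saddr 57.112.0.248 jump KUBE-MARK-MASQ
		ip protocol tcp dnat to 57.112.0.248:8080
	}
}

Loading a file like this with 'nft -f' is applied atomically, so when an
endpoint is added or removed, the KUBE-SVC-* rule (with 'numgen random mod N')
can be regenerated and swapped in a single transaction.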