Re: load balancing between two chains


Hi Phil,

There is no load balancer; curl is executed from the node that hosts both pods, so all traffic is local to the node.

As per your suggestion, I modified the nfproxy rules:

        chain k8s-nfproxy-svc-M53CN2XYVUHRQ7UB {
                numgen random mod 2 vmap { 0 : goto k8s-nfproxy-sep-I7XZOUOVPIQW4IXA, 1 : goto k8s-nfproxy-sep-ZNSGEJWUBCC5QYMQ }
                counter packets 3 bytes 180 comment ""
        }

        chain k8s-nfproxy-sep-ZNSGEJWUBCC5QYMQ {
                counter packets 0 bytes 0 comment ""
                ip saddr 57.112.0.38 meta mark set 0x00004000 comment ""
                dnat to 57.112.0.38:8080 fully-random
        }

        chain k8s-nfproxy-sep-I7XZOUOVPIQW4IXA {
                counter packets 1 bytes 60 comment ""
                ip saddr 57.112.0.36 meta mark set 0x00004000 comment ""
                dnat to 57.112.0.36:8989 fully-random
        }
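
For completeness, this is how I re-read the counters while curl is running (the table and family names below are placeholders, since the dump above does not include them):

        # table/family are guesses; substitute the actual ones from 'nft list ruleset'
        watch -n1 "nft list chain ip nfproxy k8s-nfproxy-svc-M53CN2XYVUHRQ7UB"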

I could not find the file /proc/net/nf_conntrack, but I do see that the nf_conntrack module is loaded:
nf_conntrack          131072  12 xt_conntrack,nf_nat,nft_ct,nft_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,nft_masq,nft_masq_ipv4,ip_vs
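
If it helps, on kernels built without CONFIG_NF_CONNTRACK_PROCFS that proc file is absent even when the module is loaded; conntrack(8) reads the same table over netlink instead, so I could collect something like:

        # list conntrack entries involving either endpoint address
        conntrack -L 2>/dev/null | grep -e 57.112.0.36 -e 57.112.0.38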

tcpdump in pod 1 does not see any of curl's packets, but in pod 2 it does.
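
(For reference, I capture on the backend ports from the sep chains above, inside each pod's network namespace:)

        tcpdump -ni any tcp port 8989    # pod 1 (57.112.0.36)
        tcpdump -ni any tcp port 8080    # pod 2 (57.112.0.38)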

I noticed one hopefully useful fact: it is always the endpoint associated with the 1st chain in the numgen rule that works, and the 2nd that does not.
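
To rule numgen itself in or out, I suppose a minimal standalone table like this could be loaded (table, chain names, and the test port are made up; the two counters should split roughly 50/50 if numgen behaves):

        nft -f - <<'EOF'
        table ip numgen_test {
                chain a {
                        counter
                }
                chain b {
                        counter
                }
                chain pre {
                        type filter hook prerouting priority 0;
                        tcp dport 12345 numgen random mod 2 vmap { 0 : goto a, 1 : goto b }
                }
        }
        EOF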

Is there anything else I could collect to understand why this rule does not work as intended?

Thank you very much for your help
Serguei

On 2020-01-20, 6:23 AM, "Phil Sutter" <n0-1@xxxxxxxxxxxxx on behalf of phil@xxxxxx> wrote:

    Hi Serguei,
    
    On Sun, Jan 19, 2020 at 09:46:11PM -0500, sbezverk wrote:
    > While doing some performance tests (btw, the results are awesome so far), I came across an issue. It is a Kubernetes environment with a cluster-scope service that has 2 backends, 2 pods. The rule for this service programs load balancing between 2 chains, one per backend pod. When I curl the service, only 1 backend pod replies; the second times out. If I delete the pod that was working, then the second pod starts replying to curl requests. Here are some logs and packet captures. I'd appreciate it if you could take a look and share your thoughts.
    
    Please add counters to your rules to check if both dnat statements are
    hit. You may also switch 'jump' in vmap to 'goto' and add a final rule
    in k8s-nfproxy-svc-M53CN2XYVUHRQ7UB (which should never see packets).
    
    Did you provide a dump of traffic between load-balancer and pod2? (No
    traffic is relevant info, too!) A dump of /proc/net/nf_conntrack in
    error situation might reveal something, too.
    
    Cheers, Phil
    
