Re: bad nat connection tracking performance with ip_gre

Patrick McHardy wrote:
Timo Teräs wrote:
LOCALLY GENERATED PACKET, hogs CPU
----------------------------------

IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344
TOS=0x00 PREC=0x00 TTL=8 ID=41664 DF PROTO=UDP SPT=47920
DPT=1234 LEN=1324 UID=1007 GID=1007
    1. raw:OUTPUT
    2. mangle:OUTPUT
    3. filter:OUTPUT
    4. mangle:POSTROUTING


Please include the complete output, I need to see the devices logged
at each hook.

The devices are identical for all hooks grouped under the same line.

Here are the interesting lines from one packet:

Generation:

raw:OUTPUT:policy:2 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
mangle:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
(the nat hook is called for the initial packet only):
nat:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36593 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
filter:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
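As an aside, log lines in this format can be decoded back into the per-hook traversal sequence with a small parser. This is just a sketch (not part of my setup), assuming the table:chain:type:rulenum prefix format shown above:

```python
import re

# Each log entry begins with "table:chain:type:rulenum", followed by packet
# fields such as IN=, OUT=, SRC=, DST= (format as in the lines above).
ENTRY_RE = re.compile(r'\b(raw|mangle|nat|filter):([A-Z]+):(policy|rule):(\d+)')

def hook_sequence(log_text):
    """Return the ordered list of (table, chain) pairs a packet traversed."""
    return [(m.group(1), m.group(2)) for m in ENTRY_RE.finditer(log_text)]

line = ("raw:OUTPUT:policy:2 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 "
        "mangle:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42")
print(hook_sequence(line))  # [('raw', 'OUTPUT'), ('mangle', 'OUTPUT')]
```

Running it over a full dump makes it easy to compare the hook order across the generation, loopback, and forwarding legs.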
Looped back by multicast routing:

raw:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
mangle:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1 DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977 DPT=1234 LEN=1324
The CPU hogging happens somewhere below this point, since the more
multicast destinations I have, the more CPU it takes.

Multicast forwarded (I hacked this logging into the code, but a similar
dump happens on local sendto()):

Actually, now that I think about it, at this point we should see the
inner IP contents, not the incomplete outer header yet. So apparently
ipgre_header() messes up the network_header position.

mangle:FORWARD:policy:1 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
filter:FORWARD:rule:2 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
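To illustrate the suspicion: if the network_header offset is pointed at freshly reserved but not-yet-filled outer header space, hooks decode zeroed bytes, which would match the LEN=0 / SRC=0.0.0.0 lines above. Here is a toy Python model of that bookkeeping (illustration only, not actual kernel code; the class and method names are made up for this sketch):

```python
class ToySkb:
    """Toy model of sk_buff header offsets (not the real structure)."""

    def __init__(self, inner):
        self.buf = bytearray(inner)
        self.network_header = 0  # points at the inner IP header

    def push(self, nbytes):
        """Reserve nbytes of (still zeroed) outer header space, skb_push-style."""
        self.buf = bytearray(nbytes) + self.buf
        self.network_header += nbytes  # keep pointing at the inner header

    def reset_network_header(self):
        self.network_header = 0  # now points at the new outer header

inner = b"\x45INNER-IP-HEADER"
skb = ToySkb(inner)
skb.push(24)                 # space for outer IP (20) + GRE (4), not filled yet
skb.reset_network_header()   # premature reset, as ipgre_header() seems to do
seen = bytes(skb.buf[skb.network_header:skb.network_header + 4])
print(seen)  # b'\x00\x00\x00\x00' -- hooks would log LEN=0, SRC=0.0.0.0
```

Without the premature reset, network_header would still reference the inner packet and the hooks would log the inner addresses instead.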
ip_gre xmit sends out:

raw:OUTPUT:rule:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
raw:OUTPUT:policy:2 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
mangle:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
(nat hook for initial packets)
nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
- Timo
