On 01/29/2018 04:11 AM, Matthias Walther wrote:
Hello,
Hi Matthias,
Yeah, they'd never shut you down here. Lawyers will just send you one expensive letter after another. But that's another topic. Things are almost resolved; just a few court cases are still needed to prove the new laws.
It sounds like the situation is improving.
The GRE tunnels always run between VMs on the Internet, never in my private LAN.
Thank you for the confirmation.
Exactly, from the very left to the very right. From one VM to another. Some hosts aren't virtualized, but that doesn't make a difference.
*nod*
It is not needed. We could book another public IP, assign it to the VM and request a tunnel endpoint change to the new IP. But I'd like to understand how to diagnose these kinds of problems, and what once worked flawlessly should still work.
Fair enough.
Nothing in dmesg, nothing in syslog.
:-/
It seems as if something is intercepting the packets. - I doubt that it's the NAT module, but I can't rule it out.
There is nothing there. I ran the ping at a hundred packets per second to have enough packets to find among the hundreds of other packets going through here.
Wait. tcpdump shows that packets are entering one network interface but they aren't leaving another network interface?
That sounds like something is filtering the packets. I assume that packet forwarding is enabled for the interface(s) in question, correct?
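Something like this would confirm it (eth0 / eth1 here are only placeholders for your outside and inside interfaces):

  sysctl net.ipv4.ip_forward              # global IPv4 forwarding switch
  sysctl net.ipv4.conf.eth0.forwarding    # per-interface forwarding, outside
  sysctl net.ipv4.conf.eth1.forwarding    # per-interface forwarding, inside

All of them should report 1.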
There were just the two un-NATted ICMP request packets we've seen before, followed by the next two with the next sequence number.
I assume that you're talking about the packets entering the inside interface. Or is one of the two that you're talking about possibly the same packet leaving the outside interface, without NAT having been applied?
This is why I like to sniff on specific interfaces. Purportedly PCAP-NG has the ability to record interface names / numbers, but I've never needed it or messed with it.
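For what it's worth, this is roughly how I'd capture per interface for comparison (again, eth0 / eth1 are stand-ins for your outside and inside interfaces; GRE is IP protocol 47):

  tcpdump -ni eth1 -w inside.pcap  'ip proto 47'    # GRE as it arrives from the VM
  tcpdump -ni eth0 -w outside.pcap 'ip proto 47'    # GRE as it should leave, post-NAT

Comparing the two captures should show whether the packets ever make it to the outside interface and, if they do, what NAT did to them.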
Nothing in between. Nothing that could be a wrongly NATted or broken packet.
:-/
No, we have very strict network neutrality. We filter absolutely nothing, no traffic shaping, no blocked ports.
Okay. I'm used to filtering things like NetBIOS and SMTP from end user prefixes.
I tried this one. The broken tunnels are marked with „UNREPLIED“. Well, that sounds reasonable, as there's nothing coming back.
I feel like the kicker is that the traffic is never making it out of the local system to the far side. As such the far side never gets anything, much less replies.
Can you do some checking on the far side to see if it's receiving the requests? I suspect that it is not.
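Even something as crude as this on the far side would answer it (eth0 is a placeholder for its public interface; 176.9.38.150 is the public address it should see, judging from your conntrack output below):

  tcpdump -ni eth0 'ip proto 47 and host 176.9.38.150'

If nothing shows up there while you ping, the packets are being lost on or before your NAT host.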
root@unimatrixzero ~ # conntrack -L|grep gre
conntrack v1.4.3 (conntrack-tools): 97 flow entries have been shown.
3:gre 47 176 src=185.66.193.1 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.193.1 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
9:gre 47 29 src=185.66.194.1 dst=176.9.38.150 srckey=0x0 dstkey=0x0 [UNREPLIED] src=176.9.38.150 dst=185.66.194.1 srckey=0x0 dstkey=0x0 mark=0 use=1
14:gre 47 29 src=185.66.194.0 dst=176.9.38.150 srckey=0x0 dstkey=0x0 [UNREPLIED] src=176.9.38.150 dst=185.66.194.0 srckey=0x0 dstkey=0x0 mark=0 use=1
29:gre 47 179 src=46.4.80.131 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=46.4.80.131 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
30:gre 47 177 src=185.66.193.0 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.193.0 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
40:gre 47 29 src=185.66.195.0 dst=176.9.38.150 srckey=0x0 dstkey=0x0 [UNREPLIED] src=176.9.38.150 dst=185.66.195.0 srckey=0x0 dstkey=0x0 mark=0 use=1
60:gre 47 26 src=88.198.51.94 dst=176.9.38.150 srckey=0x0 dstkey=0x0 [UNREPLIED] src=176.9.38.150 dst=88.198.51.94 srckey=0x0 dstkey=0x0 mark=0 use=1
62:gre 47 179 src=185.66.194.0 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.194.0 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
69:gre 47 177 src=185.66.193.1 dst=176.9.38.150 srckey=0x0 dstkey=0x0 src=192.168.10.62 dst=185.66.193.1 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
74:gre 47 174 src=192.168.10.62 dst=185.66.193.0 srckey=0x0 dstkey=0x0 src=185.66.193.0 dst=176.9.38.150 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
75:gre 47 169 src=185.66.194.1 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.194.1 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
80:gre 47 179 src=176.9.38.158 dst=176.9.38.156 srckey=0x0 dstkey=0x0 src=176.9.38.156 dst=176.9.38.158 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
82:gre 47 179 src=185.66.195.1 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.195.1 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
85:gre 47 179 src=46.4.80.131 dst=176.9.38.156 srckey=0x0 dstkey=0x0 src=176.9.38.156 dst=46.4.80.131 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
91:gre 47 179 src=185.66.195.0 dst=176.9.38.158 srckey=0x0 dstkey=0x0 src=176.9.38.158 dst=185.66.195.0 srckey=0x0 dstkey=0x0 [ASSURED] mark=0 use=1
Ya, the [UNREPLIED] bothers me. As does the fact that you aren't seeing the traffic leaving the host's external interface.
Do you have any conntrack tricks to look into this further?
I'd look more into the TRACE option (target) that you seem to have enabled in the raw table. That should give you more information about the packets flowing through the kernel.
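In case it helps, a minimal TRACE setup looks roughly like this (eth1 stands in for your inside interface; adjust to taste):

  modprobe nf_log_ipv4                                    # logging backend for TRACE, may already be loaded or built in
  iptables -t raw -A PREROUTING -i eth1 -p gre -j TRACE   # trace GRE coming in from the VM
  iptables -t raw -A OUTPUT -p gre -j TRACE               # trace locally generated GRE
  dmesg -w | grep 'TRACE:'                                # every table / chain the packet traverses gets logged

The log should show exactly which chain the packet last touched, e.g. whether it ever reaches nat/POSTROUTING.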
My hunch is that the packets aren't making it out onto the wire for some reason. Thus the lack of reply.
Maybe we should start looking into the code the packet goes through. Are you familiar with that part of the kernel? So far I have only found the one function I copied a few days earlier.
No, I am not.
I'll see if I can't throw together a PoC in network namespaces this evening to evaluate if NATing GRE works. - I'd like to test NATing different sets of endpoints (1:1) and NATing multiple remote endpoints to one local endpoint (many:1).
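Roughly what I have in mind, as an untested sketch (needs a reasonably recent iproute2; all names and addresses are made up, with "left" behind the NAT, "nat" doing the masquerading, and "right" playing the public far side):

  ip netns add left; ip netns add nat; ip netns add right

  ip link add l0 type veth peer name n0          # left <-> nat
  ip link add n1 type veth peer name r0          # nat  <-> right
  ip link set l0 netns left
  ip link set n0 netns nat
  ip link set n1 netns nat
  ip link set r0 netns right

  ip -n left  addr add 192.168.10.62/24 dev l0   # private side
  ip -n nat   addr add 192.168.10.1/24  dev n0
  ip -n nat   addr add 203.0.113.1/24   dev n1   # pretend public side (TEST-NET-3)
  ip -n right addr add 203.0.113.2/24   dev r0
  for ns in left nat right; do ip -n $ns link set lo up; done
  ip -n left  link set l0 up
  ip -n nat   link set n0 up
  ip -n nat   link set n1 up
  ip -n right link set r0 up

  ip -n left route add default via 192.168.10.1
  ip netns exec nat sysctl -w net.ipv4.ip_forward=1
  ip netns exec nat iptables -t nat -A POSTROUTING -o n1 -j MASQUERADE
  modprobe nf_conntrack_proto_gre 2>/dev/null    # GRE conntrack support, if built as a module

  # GRE tunnel: left uses its private address, right points at the NAT's public address
  ip -n left  tunnel add gre1 mode gre local 192.168.10.62 remote 203.0.113.2 ttl 64
  ip -n right tunnel add gre1 mode gre local 203.0.113.2   remote 203.0.113.1 ttl 64
  ip -n left  addr add 172.16.0.1/30 dev gre1
  ip -n right addr add 172.16.0.2/30 dev gre1
  ip -n left  link set gre1 up
  ip -n right link set gre1 up

  ip netns exec left ping -c 3 172.16.0.2        # 1:1 case; add a second "left" namespace for the many:1 case

If that works in namespaces but not on the real host, the problem is more likely in the host's configuration than in the kernel's GRE NAT handling.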
Maybe the „standard“ NAT code fails here because a GRE packet carries less than a UDP packet does (there are no port numbers). Or because it's stateless. Just a wild guess.
I have no idea.
I work with the tools that others build, like Lego bricks, putting them together in new and interesting ways. - I don't have the skills to create the bricks themselves.
Maybe it has something to do with whether the first packet in this tunnel is incoming or outgoing. This is random, because you never know which of the two BGP daemons tries to open a connection first.
You might be onto something about the first packet. At least as far as what connection tracking sees.
Bye,
:-)

--
Grant. . . .
unix || die