RE: SNMP mangling anybody?

> If I understand your question, several advantages:
>
> - To reduce the number of entries in SNMP ACLs in the "rest of the network".  In this example, reducing ACL entries from 3 to 1.
> - To allow insertion of additional SNMP managers without editing SNMP ACLs in the "rest of the network".  In this example, adding two new managers host[DE] is transparent to the "rest of the network".
> - To allow relocation of SNMP managers without updating SNMP ACLs in the "rest of the network".  Example: failover of all SNMP managers (host[ABC]) from one city to another due to a disaster.

What you describe here sounds like what you actually want is ANYCAST (in reverse), having all host[A-Z] use the same VIP.
https://en.wikipedia.org/wiki/Anycast

An ANYCAST setup like that would allow SNMP traps sent to that ONE IP, from wherever you want, to end up at the closest available host[A-Z].
All SNMP requests from any of the host[A-Z] would then use that same IP as their source when sending out (this takes additional setup for ANYCAST to allow reverse traffic).
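
For illustration only, a host-side sketch of such an ANYCAST setup; the VIP 192.0.2.10 and the choice of routing daemon are my assumptions, not something from your mail:

	# hypothetical shared VIP, assigned on every host[A-Z]
	ip addr add 192.0.2.10/32 dev lo
	# each host then advertises 192.0.2.10/32 into the IGP (e.g. with bird or FRR),
	# so traps sent to the VIP reach the closest live host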

However, a basic implementation of this setup would normally require some routing protocols (I do not know if this is a showstopper).
If it is, I would suggest doing NAT on the RTR unit instead, or moving NAT1 to be either a parallel router next to RTR or a "router on a stick", simplifying the host[A-Z] setup.
(Again, not knowing your setup, I just suggest what would make for a simpler and more basic design *with less plumbing*; see the sketch below.)
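
A sketch of what I mean by NAT on the RTR unit; the manager subnet 10.0.0.0/24, the outgoing interface eth0 and the shared address 192.0.2.10 are all made-up examples:

	# rewrite the source of all manager traffic to one shared address
	iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to-source 192.0.2.10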

SIDENOTE: about your "bridge" setup, I have just never seen that with "bond*", as that would typically refer to a set of physical interfaces.
A bridge would typically be linked to either the BOND itself or a VLAN on that BOND; not that this means your setup is wrong (just that I do not understand it).

What we typically see on all systems set up by "us" or delivered to "us" is one of the following 2 variations:
The CHASSIS has its own HOST OS, and this OS has its PHYSICAL interfaces in a BOND not visible to the underlying VMs.
If the chassis has 2 switches with 2 ports for each SERVERBLADE (4 ports in total), they might be set up as 2 BONDS.
The ONE or TWO bonds appear as ONE or TWO SINGLE interfaces inside the VM (and can be bonded again if you want/need),
and this comes as either a trunk or an access-port (as mentioned, with the VLAN connected as yet another slave on the bridge).
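
For reference, a minimal sketch of that kind of HOST OS setup with iproute2; the interface names, bond mode and VLAN id are all just examples:

	# two physical ports enslaved into one bond (slaves must be down first)
	ip link add bond0 type bond mode 802.3ad
	ip link set eth0 down; ip link set eth0 master bond0
	ip link set eth1 down; ip link set eth1 master bond0
	# a VLAN on top of the bond, which in turn is the slave of the bridge
	ip link add link bond0 name bond0.100 type vlan id 100
	ip link add br0 type bridge
	ip link set bond0.100 master br0
	ip link set bond0 up; ip link set bond0.100 up; ip link set br0 up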

However, on your drawing the external interface is the only "bridge" with data going out of the chassis, which is where a BOND could make sense.
So it is confusing why there is a bond also on the internal interface, on the same bond no less.
Normally there would be an internal bridge on the CHASSIS HOST OS, which would be a separate bridge with no access to physical interfaces
(unless you planned for it to go to another CHASSIS).
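
Such an internal-only bridge would be as simple as this sketch (the name br-int is my invention); the VM tap interfaces attach to it and no physical interface ever does:

	# chassis-internal bridge: no physical slaves, only VM tap devices
	ip link add br-int type bridge
	ip link set br-int up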

Again, your design might be part of a "greater plan" and I do not have the full picture 😊


Best regards
André Paulsberg-Csibi
Senior Network Engineer 
IBM Services AS

-----Original Message-----
From: FAIR, ED [mailto:ef7193@xxxxxxx] 
Sent: Tuesday, December 12, 2017 6:57 PM
To: André Paulsberg-Csibi (IBM Consultant) <Andre.Paulsberg-Csibi@xxxxxxxx>; netfilter@xxxxxxxxxxxxxxx
Subject: RE: SNMP mangling anybody?

>>> 1. What is the "BOX" nat1, just a plain VM (virtual machine)? <<<

Just a plain Linux machine, 2.6.32 or later kernel, same OS as host[ABC], but with ip_forward=1 and with iptables entries for NAT:

	# masquerade everything leaving via the external VLAN interface
	iptables -t nat -A POSTROUTING -o bond0.1 -j MASQUERADE
	# let reply traffic for established connections back in to the internal side
	iptables -A FORWARD -i bond0.1 -o bond0.2 -m state --state RELATED,ESTABLISHED -j ACCEPT
	# forward anything from the internal side out to the external side
	iptables -A FORWARD -i bond0.2 -o bond0.1 -j ACCEPT
	# plus some filters to drop non-SNMP traffic

>>> 2. Why is your internal chassis network not just a plain BRIDGE? Is there any reason for using a VLAN? <<<

It is "plain bridge" with 802.1q VLAN.  I have never configured NAT translation using bridged interfaces, always routed.  Is this even possible?

>>> 3. Why do you not want the SNMP request with the external source IP to leave the HOSTS on the internal side and then loop around via nat1 back? (If it is going out externally anyway, why not just let it go the normal shortest/fastest way?) <<<

If I understand your question, several advantages:

- To reduce the number of entries in SNMP ACLs in the "rest of the network".  In this example, reducing ACL entries from 3 to 1.
- To allow insertion of additional SNMP managers without editing SNMP ACLs in the "rest of the network".  In this example, adding two new managers host[DE] is transparent to the "rest of the network".
- To allow relocation of SNMP managers without updating SNMP ACLs in the "rest of the network".  Example: failover of all SNMP managers (host[ABC]) from one city to another due to a disaster.

In the case of just three managers (host[ABC]) the advantages are not so great, but in the case of, say, 26 managers (host[A-Z]) the advantage becomes significant.  In reality, the scale will be 5-20 managers per NAT, perhaps greater if the conntrack performance is acceptable.
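
To make the first point concrete, a sketch of an agent-side ACL in iptables terms (all addresses are hypothetical):

	# today: one SNMP ACL entry per manager, on every agent
	iptables -A INPUT -p udp --dport 161 -s 192.0.2.1 -j ACCEPT
	iptables -A INPUT -p udp --dport 161 -s 192.0.2.2 -j ACCEPT
	iptables -A INPUT -p udp --dport 161 -s 192.0.2.3 -j ACCEPT
	# behind the NAT: a single entry covers all current and future managers
	iptables -A INPUT -p udp --dport 161 -s 192.0.2.10 -j ACCEPT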

