Re: autofs reverts to IPv4 for multi-homed IPv6 server ?

I should add that routers usually do not carry routes for ULAs in the
first place, because every individual can generate their own addresses out
of fc00::/7 and there is no central registration authority or even a
requirement to register. In the worst case our packets would vanish at the
DFN [1] exchange nodes to the internet if they ever leaked out of the
university's network. And there is no route into the university's
network for any ULA. So it is IMHO a bit better than pure security by obscurity.

[1] https://en.wikipedia.org/wiki/Deutsches_Forschungsnetz
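
For completeness: on a Linux router the equivalent of what our routers do
would be a simple blackhole route for the whole ULA block, something along
these lines (iproute2 syntax):

    # drop any ULA traffic for which no more specific internal route exists
    ip -6 route add blackhole fc00::/7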

On Fri, Apr 29, 2016 at 04:10:44PM +0200, Christof Koehler wrote:
> Hello,
> 
> > 
> > Would that approach help with what you're trying to achieve?
> > 
> 
> I am not sure of anything anymore after noticing what mount
> does. On top of that, I am not sure I understand what you are proposing :-)
> 
> So, please allow me to write down what my thinking was and what I
> thought I needed, instead of answering straight away. I will try to be
> brief about it. Maybe you have a different perspective on what I am
> trying to do and can point out whether it is unreasonable, or whether it is
> something that can/should be solved at mount's or autofs's level at all.
> 
> Independent of that, maybe a) fixing the situation where
> autofs/mount falls back to IPv4, which I understand is a bug, and b)
> having the possibility to pass IPv6 addresses as the result of an
> executable map lookup (as is possible with IPv4 addresses) is what I really
> need. I assume these two might be easier to do? If I can pass IPv6 addresses
> from the executable map I can shell script what I think I need myself. Of
> course I still have to check whether passing IPv6 is actually not possible, as
> I speculated earlier.
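> 
> Just to make this concrete, the kind of executable (program) map I have
> in mind would be a small shell script which autofs calls with the map key
> and which prints the map entry, here with an invented ULA and assuming
> bracketed IPv6 literals are (or become) accepted, which is exactly the
> point I still have to verify:
> 
>     #!/bin/sh
>     # program map: autofs passes the lookup key as $1 and expects the
>     # corresponding map entry on stdout; exit non-zero for unknown keys
>     case "$1" in
>         data) echo "-fstype=nfs,proto=tcp6 [fd5f:1234:5678::10]:/export/data" ;;
>         *)    exit 1 ;;
>     esac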
> 
> But please read on, keeping in mind that the original observation which started
> this was my surprise at discovering autofs/mount suddenly falling back to IPv4,
> while at that time I was still naively assuming IPv6 would simply work.
> 
> As you know, it is completely normal with IPv6 that a machine (server or client)
> has several IP addresses: a link-local fe80:: address (always there, I will ignore
> it), one (or more) statically assigned 2001:: GUAs (-> DNS AAAA), a dynamically
> assigned GUA-derived privacy address, an fd5f:: ULA (which should not be in
> publicly visible DNS and should not get routed beyond the organization's boundary)
> and on top of that one IPv4 address.
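> 
> To illustrate (all prefixes invented, using documentation ranges), a
> typical client here carries something like:
> 
>     fe80::21b:21ff:fe11:2233/64          link local (ignored)
>     2001:db8:1::10/64                    static GUA -> DNS AAAA
>     2001:db8:1:0:49ab:cd12:3456:789a/64  temporary privacy GUA (RFC 4941)
>     fd5f:1234:5678::10/64                ULA, internal only
>     192.0.2.10/24                        IPv4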
> 
> In the first (out of the box) setup we had (GUA only), mount/autofs (and
> everything else, like ssh) were happily using the privacy address with its limited
> lifetime to connect to the (NFS) servers, both workstations and dedicated
> file servers. This strikes me as problematic for several reasons:
> 1. The privacy address is supposed to change after some time (the old one
>    becomes deprecated), so I cannot easily identify the client on the server.
> 2. I have to export NFS unconditionally to at least a whole /64. I like to
>    export on a per-client basis, either by hostname or IP; but see [1].
> 3. If the lifetime of the privacy address ends it becomes deprecated and
>    (I did not test this) NFS requests may then suddenly arrive from the
>    current privacy address while the mount was made via a no longer
>    existing (or at least deprecated) one? Not sure, but I would like to avoid
>    situations like that from the beginning.
> 
> Manually adding the two addrlabels mentioned in my previous mail makes sure 
> that the clients will use their statically assigned GUAs to connect to the
> servers if using mount or autofs with only a single IPv6 GUA entry in the
> private DNS.
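> 
> For reference, what I mean is something along these lines (hypothetical
> prefixes; giving the server network and the host's static GUA the same
> label makes the source address selection prefer the static GUA over the
> temporary one for destinations in that network):
> 
>     # same label on the destination prefix and our static GUA, so the
>     # "prefer matching label" rule picks the static GUA as source
>     ip addrlabel add prefix 2001:db8:aaaa::/64 label 99
>     ip addrlabel add prefix 2001:db8:bbbb::10/128 label 99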
> 
> Still, I was not completely at ease with using GUAs like this:
> 1. You have to make sure the manual addrlabel is always there,
>    and you might only remember at inconvenient moments that the defaults
>    were modified ("principle of least undocumented change" ;-)
> 2. The NFS servers/clients are on GUAs and might in theory leak traffic
>    all over the internet. In our situation we have to use VRF routing on the
>    university's Cisco 6500 routers; one typo and we become world
>    accessible. Of course there are firewalls and ip6tables rules on the
>    servers themselves. Also, client traffic might get misdirected and leak out
>    via a GUA. On top of that, rpc listening on every address it can find and
>    the kitchen sink is a little problem anyway. See also [1].
> 3. The NFS servers share a /64 with random laptops; we (i.e. "me") could put
>    different VLANs on different wall outlets, but in practice, with the
>    way people (scientists) behave ...
> 
> In theory using ULAs instead of GUAs for NFS sounds like a nice thing
> then. 
> 
> Favouring ULA over GUA where possible is the RFC's default, so no manual
> addrlabel is required.
> Internal traffic would use ULAs (which all routers here blackhole) and
> therefore stay internal. Outside DNS queries would not resolve ULAs
> anyway. Only traffic directed outside goes outside, using the
> appropriate GUAs. There is a weak separation from the laptops; they can
> still ssh in via the static GUA assigned to every server (workstation),
> but I can restrict NFS exports to the known ULAs easily (see the example
> below).
> On top of that, in the unlikely event that we ever have to
> change GUAs ("renumbering" in IPv6 terms) the ULAs would stay stable.
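> 
> Restricting the exports to the known ULAs would then look roughly like
> this in /etc/exports (addresses invented):
> 
>     # per-client exports to the known, stable ULAs only
>     /export/home  fd5f:1234:5678::10(rw,sync,no_subtree_check)
>     /export/home  fd5f:1234:5678::11(rw,sync,no_subtree_check)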
> 
> Only: neither mount (as I just discovered) nor autofs takes the ULA
> vs. GUA preference, or the possibility that not all addresses might be
> equal, into account as I initially assumed, with autofs eventually even falling
> back to IPv4 due to the bug you mentioned. So this is where my idea to
> use ULAs clearly does not work; I am no longer sure it should work,
> anyway. Also, as you can see, I could work with GUAs only, but someone else
> might stumble upon the same situation later if IPv6 ever gets really widespread.
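> 
> In case it is useful for testing: nfs(5) at least documents a way to
> force the transport by hand, with an explicit address family and a
> bracketed literal (invented address again):
> 
>     # proto=tcp6 forces NFS over IPv6; brackets are required around
>     # IPv6 literals in the device string
>     mount -t nfs -o proto=tcp6 '[fd5f:1234:5678::10]:/export/home' /mnt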
> 
> 
> Thank you very much for reading all this!
> 
> 
> Best Regards
> 
> Christof
> 
> [1] I am aware that with NFSv4 the solution is of course to use Kerberos
> security. However, currently the old cluster (Ubuntu 10.04 with hand-rolled
> kernels, drivers and OFED stack; tcp6 transport for NFS is only
> available on 10.10 or later) is using the same servers, and when a
> Kerberos ticket runs out while a calculation is running (think 10-day
> jobs) you have a problem. Also, queuing systems (Torque/Maui, SLURM) are not
> really able to take care of Kerberos for the user. This situation will change
> with the new cluster, which is completely separated. Then I will think about
> moving to Kerberos again.

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler@xxxxxxxxxxxxxxxxxxx
Universitaet Bremen/ BCCMS          phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.12       fax: +49-(0)421-218-62770
28359 Bremen  

PGP: http://www.bccms.uni-bremen.de/cms/people/c_koehler/