Re: Regression after changes for mounts from IPv6 addresses introducing delays

Hi Ian,

On Mon, Jul 10, 2023 at 02:17:03PM +0800, Ian Kent wrote:
> 
> On 10/7/23 14:09, Salvatore Bonaccorso wrote:
> > Hi Ian,
> > 
> > On Mon, Jul 10, 2023 at 10:53:31AM +0800, Ian Kent wrote:
> > > On 9/7/23 22:57, Salvatore Bonaccorso wrote:
> > > > Hi
> > > > 
> > > > The following sort of regression was noticed while updating a client
> > > > running Debian buster (autofs 5.1.2 based) to Debian bullseye
> > > > (autofs 5.1.7 based); I verified it is still present with 5.1.8. The
> > > > following setup is present:
> > > > 
> > > > There is a NFS server, dualstacked, with both public IPv4 and IPv6
> > > > addresses resolvable in DNS. As I cannot put the public IPs here in
> > > > the report, let's assume it is called nfs-server with addresses
> > > > 192.168.122.188 and fc00:192:168:122::188.
> > > I assume the IPv6 address here is not what's used in practice. It
> > > doesn't look valid, it doesn't look like an IPv4 mapped address, what
> > > is it, how was it constructed?
> > I'm sorry, this was just me trying to use something valid from
> > https://en.wikipedia.org/wiki/Unique_local_address . Yes, this is not
> > the IPv6 address which the server has in practice.
> 
> Yes, it's been hard over time given the available IPv6 support has been
> poor and setting up locally has always been a problem for me.
> 
> But, as I say, my ISP is there now so I should be good to go.

I think I finally have a further clue: while the host itself has IPv4
only, it sits in a network which the networking team in principle allows
to be run dual-stacked, and from the router in that network it
accordingly receives:

# ip -6 r
fe80::/64 dev bond0 proto kernel metric 256 pref medium
default via fe80::2220:ff:fe00:aa dev bond0 proto ra metric 1024 expires 1667sec hoplimit 64 pref medium
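
As a side note on reproducing this: "proto ra" in the route above means
the default route was learned from a router advertisement, so deleting
it only helps until the next RA arrives. A sketch of how one could
inspect and persistently suppress it (bond0 is assumed to be the uplink,
as in the routing table above):

```shell
# Show only the IPv6 default route; "proto ra" marks it as learned from
# a router advertisement, so the kernel re-installs it after deletion:
ip -6 route show default

# Disabling RA acceptance on the interface (and deleting the route once)
# keeps it away persistently -- shown commented out as it needs root:
#   sysctl -w net.ipv6.conf.bond0.accept_ra=0
#   ip -6 route del default
```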

If we delete the additional IPv6 default route, then the mount is quick
again, but *still* involves the IPv6 address:

# ip -6 r del default

then

Jul 10 08:56:17 clienthost automount[24683]: handle_packet: type = 3
Jul 10 08:56:17 clienthost automount[24683]: handle_packet_missing_indirect: token 67, name testuser, request pid 25897
Jul 10 08:56:17 clienthost automount[24683]: attempting to mount entry /home/testuser
Jul 10 08:56:17 clienthost automount[24683]: lookup_mount: lookup(program): testuser -> -nosuid,rw,hard,proto=tcp nfs-server:/srv/homes/homes01/testuser
Jul 10 08:56:17 clienthost automount[24683]: lookup_mount: lookup(program): looking up testuser
Jul 10 08:56:17 clienthost automount[24683]: lookup_mount: lookup(program): testuser -> -nosuid,rw,hard,proto=tcp nfs-server:/srv/homes/homes01/testuser
Jul 10 08:56:17 clienthost automount[24683]: parse_mount: parse(sun): expanded entry: -nosuid,rw,hard,proto=tcp nfs-server:/srv/homes/homes01/testuser
Jul 10 08:56:17 clienthost automount[24683]: parse_mount: parse(sun): gathered options: nosuid,rw,hard,proto=tcp
Jul 10 08:56:17 clienthost automount[24683]: parse_mount: parse(sun): dequote("nfs-server:/srv/homes/homes01/testuser") -> nfs-server:/srv/homes/homes01/testuser
Jul 10 08:56:17 clienthost automount[24683]: parse_mount: parse(sun): core of entry: options=nosuid,rw,hard,proto=tcp, loc=nfs-server:/srv/homes/homes01/testuser
Jul 10 08:56:17 clienthost automount[24683]: sun_mount: parse(sun): mounting root /home, mountpoint testuser, what nfs-server:/srv/homes/homes01/testuser, fstype nfs, options nosuid,rw,hard,proto=tcp
Jul 10 08:56:17 clienthost automount[24683]: mount(nfs): root=/home name=testuser what=nfs-server:/srv/homes/homes01/testuser, fstype=nfs, options=nosuid,rw,hard,proto=tcp
Jul 10 08:56:17 clienthost automount[24683]: mount(nfs): nfs options="nosuid,rw,hard,proto=tcp", nobind=0, nosymlink=0, ro=0
Jul 10 08:56:17 clienthost automount[24683]: get_nfs_info: called with host nfs-server(192.168.122.188) proto 6 version 0x20
Jul 10 08:56:17 clienthost automount[24683]: get_nfs_info: nfs v3 rpc ping time: 0.000188
Jul 10 08:56:17 clienthost automount[24683]: get_nfs_info: host nfs-server cost 187 weight 0
Jul 10 08:56:17 clienthost automount[24683]: prune_host_list: selected subset of hosts that support NFS3 over TCP
Jul 10 08:56:17 clienthost automount[24683]: get_supported_ver_and_cost: called with host nfs-server(XXXX:XXXX:XXXX:XXXX::188) version 0x20
Jul 10 08:56:17 clienthost automount[24683]: get_supported_ver_and_cost: rpc ping time 0.000150
Jul 10 08:56:17 clienthost automount[24683]: get_supported_ver_and_cost: cost 149 weight 0
Jul 10 08:56:17 clienthost automount[24683]: mount_mount: mount(nfs): calling mkdir_path /home/testuser
Jul 10 08:56:17 clienthost automount[24683]: mount(nfs): calling mount -t nfs -s -o nosuid,rw,hard,proto=tcp nfs-server:/srv/homes/homes01/testuser /home/testuser
Jul 10 08:56:17 clienthost automount[24683]: mount_mount: mount(nfs): mounted nfs-server:/srv/homes/homes01/testuser on /home/testuser
Jul 10 08:56:17 clienthost automount[24683]: dev_ioctl_send_ready: token = 67
Jul 10 08:56:17 clienthost automount[24683]: mounted /home/testuser

So you might argue it is likely an autofs problem then? Is the behaviour
regression here considered the user's problem (even though the network is
not fully under the user's control), or should IPv6 not be involved at
all? And when proto=tcp is specified, why would IPv6 be involved? (Note
this question/point was sort of raised as well when committing
https://git.kernel.org/pub/scm/linux/storage/autofs/autofs.git/commit/?id=c578e5b37c3cf3ff17a4284f9d9269040cb1d975.)
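
Part of the answer may simply be name resolution: proto=tcp only selects
the transport, not the address family, and the resolver hands back both
A and AAAA records for a dual-stacked name regardless of whether the
client has a usable IPv6 route. A quick way to see this (using localhost
here so the command works anywhere; in the setup above it would be the
dual-stacked nfs-server name):

```shell
# getent ahosts prints every address the resolver returns for the name,
# both IPv4 and IPv6 -- these are the candidates autofs gets to probe.
# (getent ahostsv4 / ahostsv6 restrict the lookup to one family.)
getent ahosts localhost
```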

So I'm a bit lost as to whether this is solely the user's fault or
whether there is still a bug involved in autofs.

Thanks again for your time!

Regards,
Salvatore
