[resend -- I hate gmail]

On Fri, Aug 15, 2014 at 7:19 AM, Felix Miata <mrmazda@xxxxxxxxxxxxx> wrote:
> On 2014-08-14 14:33 (GMT-0700) Andrew Lutomirski composed:
>
>> Felix Miata wrote:
>
>>> On 2014-08-14 12:36 (GMT-0700) Andrew Lutomirski composed:
>
>>>> J. Bruce Fields wrote:
>
>>>>> On Tue, Aug 12, 2014 at 12:58:13AM -0400, Felix Miata wrote:
>
>>>>>> Why, when nothing is automounting nfs either as client or server, does boot
>>>>>> not proceed to completion without a 2+ minute pause while nfs-server fails
>>>>>> to start?
>
>>>>> Sounds like a bug. Is the failure to start expected?
>
>>> Not expected by me.
>
>>>> I wonder whether https://bugzilla.redhat.com/show_bug.cgi?id=1129425 is
>>>> related. It's possible for nfs client mounts to start a copy of rpc.statd
>>>> that systemd doesn't know about.
>
>>> I can't tell whether that's related or not. The only significant NFS problem
>>> I can remember before this, once I originally figured out what little I
>>> needed to figure out about NFS eons ago when setting up a LAN for the first
>>> time, had to do with rpcbind replacing portmap.
>
> I miswrote, forgetting about other installations where the nfs delay occurs
> on trying to mount after boot, instead of the nfs delay being during boot:
> https://lists.fedoraproject.org/pipermail/test/2014-May/121461.html
>
>> If you wait long enough for boot to finish, and you do:
>
>> # ps -A | grep rpc
>> # systemctl status nfs-lock
>
>> you might get a hint.
>
> I'm not sure whether I'm seeing any hint(s) or not:
>
> ## F21 host gx280
> # nfs-utils-1.3.0-5.0.fc21.i686

This is not the result I would have expected if that bug caused it. I would
have expected to see rpc.statd running but nfs-lock in some kind of
not-running or failed state.

--Andy

> ## before manual nfs mountings
>
> # dmesg | tail
> [   40.430796] cfg80211: Calling CRDA to update world regulatory domain
> [   40.497841] EXT4-fs (sda15): mounted filesystem with ordered data mode. Opts: acl
> [   43.811184] tg3 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex
> [   43.811364] tg3 0000:02:00.0 eth0: Flow control is on for TX and on for RX
> [  109.308166] NFSD: starting 90-second grace period (net c0d4b180)
> [  109.376839] systemd-journald[250]: Received request to flush runtime journal from PID 1
> # ps -A | grep rpc
>   268 ?        00:00:00 rpciod
>   426 ?        00:00:00 rpc.idmapd
>   455 ?        00:00:00 rpc.statd
>   463 ?        00:00:00 rpcbind
>   483 ?        00:00:00 rpc.mountd
> # systemctl status nfs-lock
> ● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; enabled)
>    Active: active (running) since Fri 2014-08-15 09:42:03 EDT; 9min ago
>   Process: 454 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
>  Main PID: 455 (rpc.statd)
>    CGroup: /system.slice/rpc-statd.service
>            └─455 /usr/sbin/rpc.statd --no-notify
>
> Aug 15 09:42:02 gx280 rpc.statd[455]: Version 1.3.0 starting
> Aug 15 09:42:02 gx280 rpc.statd[455]: Flags: TI-RPC
> Aug 15 09:42:03 gx280 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
>
> ## after manual nfs mountings
>
> # dmesg | tail
> [   43.811184] tg3 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex
> [   43.811364] tg3 0000:02:00.0 eth0: Flow control is on for TX and on for RX
> [  109.308166] NFSD: starting 90-second grace period (net c0d4b180)
> [  109.376839] systemd-journald[250]: Received request to flush runtime journal from PID 1
> [  596.937403] FS-Cache: Loaded
> [  596.973968] FS-Cache: Netfs 'nfs' registered for caching
> [  596.985813] Key type dns_resolver registered
> [  597.034781] NFS: Registering the id_resolver key type
> [  597.037038] Key type id_resolver registered
> [  597.039149] Key type id_legacy registered
> # ps -A | grep rpc
>   268 ?        00:00:00 rpciod
>   426 ?        00:00:00 rpc.idmapd
>   455 ?        00:00:00 rpc.statd
>   463 ?        00:00:00 rpcbind
>   483 ?        00:00:00 rpc.mountd
> # systemctl status nfs-lock
> ● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; enabled)
>    Active: active (running) since Fri 2014-08-15 09:42:03 EDT; 12min ago
>   Process: 454 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
>  Main PID: 455 (rpc.statd)
>    CGroup: /system.slice/rpc-statd.service
>            └─455 /usr/sbin/rpc.statd --no-notify
>
> Aug 15 09:42:02 gx280 rpc.statd[455]: Version 1.3.0 starting
> Aug 15 09:42:02 gx280 rpc.statd[455]: Flags: TI-RPC
> Aug 15 09:42:03 gx280 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
>
> (4 nfs mounts total)
>
> ## F22 host gx280
> # Linux gx280 3.16.0-1.fc22.i686+PAE #1 SMP Mon Aug 4 10:21:21 UTC 2014 i686 i686 i386 GNU/Linux
> # nfs-utils-1.3.0-5.0.fc22.i686
>
> ## before manual nfs mountings
>
> # dmesg | tail
> [   39.375123] cfg80211: Calling CRDA to update world regulatory domain
> [   40.081581] EXT4-fs (sda15): mounted filesystem with ordered data mode. Opts: acl
> [   40.377518] EXT4-fs (sda20): mounting ext3 file system using the ext4 subsystem
> [   41.025282] EXT4-fs (sda20): mounted filesystem with ordered data mode. Opts: acl
> [   43.523208] tg3 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex
> [   43.523390] tg3 0000:02:00.0 eth0: Flow control is on for TX and on for RX
> [  109.407229] systemd-journald[252]: Received request to flush runtime journal from PID 1
> # ps -A | grep rpc
>   260 ?        00:00:00 rpciod
>   424 ?        00:00:00 rpc.idmapd
>   460 ?        00:00:00 rpc.statd
>   466 ?        00:00:00 rpcbind
>   486 ?        00:00:00 rpc.mountd
> # systemctl status nfs-lock
> ● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; enabled)
>    Active: active (running) since Fri 2014-08-15 10:00:32 EDT; 3min 20s ago
>   Process: 457 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
>  Main PID: 460 (rpc.statd)
>    CGroup: /system.slice/rpc-statd.service
>            └─460 /usr/sbin/rpc.statd --no-notify
>
> Aug 15 10:00:32 gx280 rpc.statd[460]: Version 1.3.0 starting
> Aug 15 10:00:32 gx280 rpc.statd[460]: Flags: TI-RPC
> Aug 15 10:00:32 gx280 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
>
> ## after manual nfs mountings
>
> # dmesg | tail
> [   41.025282] EXT4-fs (sda20): mounted filesystem with ordered data mode. Opts: acl
> [   43.523208] tg3 0000:02:00.0 eth0: Link is up at 1000 Mbps, full duplex
> [   43.523390] tg3 0000:02:00.0 eth0: Flow control is on for TX and on for RX
> [  109.407229] systemd-journald[252]: Received request to flush runtime journal from PID 1
> [  254.048277] FS-Cache: Loaded
> [  254.095219] FS-Cache: Netfs 'nfs' registered for caching
> [  254.116330] Key type dns_resolver registered
> [  254.164242] NFS: Registering the id_resolver key type
> [  254.164390] Key type id_resolver registered
> [  254.164468] Key type id_legacy registered
> # ps -A | grep rpc
>   260 ?        00:00:00 rpciod
>   424 ?        00:00:00 rpc.idmapd
>   460 ?        00:00:00 rpc.statd
>   466 ?        00:00:00 rpcbind
>   486 ?        00:00:00 rpc.mountd
> # systemctl status nfs-lock
> ● rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; enabled)
>    Active: active (running) since Fri 2014-08-15 10:00:32 EDT; 3min 46s ago
>   Process: 457 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
>  Main PID: 460 (rpc.statd)
>    CGroup: /system.slice/rpc-statd.service
>            └─460 /usr/sbin/rpc.statd --no-notify
>
> Aug 15 10:00:32 gx280 rpc.statd[460]: Version 1.3.0 starting
> Aug 15 10:00:32 gx280 rpc.statd[460]: Flags: TI-RPC
> Aug 15 10:00:32 gx280 systemd[1]: Started NFS status monitor for NFSv2/3 locking..
>
> (4 nfs mounts total)
> --
> "The wise are known for their understanding, and pleasant
> words are persuasive." Proverbs 16:21 (New Living Translation)
>
> Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
>
> Felix Miata *** http://fm.no-ip.com/
> --
> devel mailing list
> devel@xxxxxxxxxxxxxxxxxxxxxxx
> https://admin.fedoraproject.org/mailman/listinfo/devel
> Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
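One way to probe the hypothesis from bug 1129425 above — an rpc.statd started by a client mount that systemd doesn't track — is to compare the PID systemd records for rpc-statd.service with the rpc.statd PIDs actually running. A rough sketch, not part of the original thread; the `check_statd` helper and the literal PIDs are illustrative (455 happens to be the MainPID shown in the F21 output above):

```shell
#!/bin/sh
# check_statd UNIT_PID PID...
# Takes the MainPID that systemd records for rpc-statd.service, then the
# PIDs of every rpc.statd actually running, and reports any PID that
# systemd does not own (the symptom suggested by bug 1129425).
check_statd() {
    unit_pid=$1
    shift
    for pid in "$@"; do
        if [ "$pid" != "$unit_pid" ]; then
            echo "untracked rpc.statd pid: $pid"
        fi
    done
}

# On a live host one might gather the real inputs like this:
#   unit_pid=$(systemctl show -p MainPID rpc-statd.service | cut -d= -f2)
#   check_statd "$unit_pid" $(pgrep -x rpc.statd)
check_statd 455 455        # matches systemd's MainPID: nothing reported
check_statd 455 455 999    # a second, untracked statd would be flagged
```

In the outputs quoted above the lone rpc.statd (455/460) matches the unit's MainPID on both hosts, which is consistent with Andy's remark that this is not the failure mode that bug would produce.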