On 17/1/17 6:36 am, Stephen Morris wrote:
Hi,
My NAS device now fails to mount at boot time via its CIFS
definition in fstab, but the corresponding NFS definition mounts quite
happily. Also, after the system comes up and I log into KDE, I can
mount the CIFS device manually. As far as I am aware, the only
difference between when it was mounting at boot time and now is
several system updates; the update I did yesterday morning (which
updated several hundred packages, including a new kernel) has not
rectified the issue. The systemctl output is below; I have blanked
out the userid and password for security reasons.
Does anyone have any ideas why this has now stopped working?
systemctl status mnt-nas.mount
● mnt-nas.mount - /mnt/nas
   Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2017-01-17 06:40:15 AEDT; 40min ago
    Where: /mnt/nas
     What: //192.168.1.12/Volume_1
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
  Process: 1299 ExecMount=/usr/bin/mount //192.168.1.12/Volume_1 /mnt/nas -t cifs -o username=********,password=********,cache=strict,_netdev,rw (code=exited, status=32)

Jan 17 06:40:15 localhost.localdomain systemd[1]: Mounting /mnt/nas...
Jan 17 06:40:15 localhost.localdomain mount[1299]: mount error(101): Network is unreachable
Jan 17 06:40:15 localhost.localdomain mount[1299]: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Jan 17 06:40:15 localhost.localdomain systemd[1]: mnt-nas.mount: Mount process exited, code=exited status=32
Jan 17 06:40:15 localhost.localdomain systemd[1]: Failed to mount /mnt/nas.
Jan 17 06:40:15 localhost.localdomain systemd[1]: mnt-nas.mount: Unit entered failed state.
I have also listed below the fstab definitions for the NFS and CIFS
interfaces.
192.168.1.12:/mnt/HD/HD_a2 /mnt/nfs nfs users,noatime,nolock,bg,sec=sys,tcp,timeo=1800,_netdev,rw 0 0
//192.168.1.12/Volume_1 /mnt/nas cifs auto,username=********,password=********,cache=strict,_netdev,rw 0 0
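The "Network is unreachable" error suggests the mount is being
attempted before the network is up, despite _netdev. For what it is
worth, systemd.mount(5) documents x-systemd options that can go in
fstab to tie a mount to the network explicitly, or to defer it until
first access; I have not tried either yet, so this is only a sketch of
what I understand the line would look like:

# untested: wait for network-online.target before mounting (per systemd.mount(5))
//192.168.1.12/Volume_1 /mnt/nas cifs auto,username=********,password=********,cache=strict,_netdev,x-systemd.requires=network-online.target,rw 0 0

or, deferring the mount until the directory is first accessed:

# untested: generate an automount unit instead of mounting at boot
//192.168.1.12/Volume_1 /mnt/nas cifs username=********,password=********,cache=strict,_netdev,noauto,x-systemd.automount,rw 0 0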
This issue is now getting ridiculous. Yesterday morning, when I booted
after a cold start, the CIFS mount point failed to mount, apparently
because the network was unavailable; yet when I booted this morning
after a cold start, the CIFS mount point mounted quite happily. Given
that both the CIFS and NFS mount points are mounted in parallel, it is
starting to look as though systemd has a problem handling those mounts
in parallel.
Is there any way to disable systemd's parallel mounting to test this
theory, or alternatively, is it possible to force the two mounts to be
serialized (a sketch of one possible way follows below)? If systemd is
having issues with these two mounts, why these two only, and not the
seven Linux partitions that are also mounted from fstab?
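For anyone who wants to test the serialization theory, systemd.mount(5)
appears to offer a per-line way to do it from fstab via
x-systemd.requires-mounts-for=; a sketch, untested on my system:

# untested: only attempt the CIFS mount once /mnt/nfs is mounted (per systemd.mount(5))
//192.168.1.12/Volume_1 /mnt/nas cifs auto,username=********,password=********,cache=strict,_netdev,x-systemd.requires-mounts-for=/mnt/nfs,rw 0 0

As I read the man page, that adds a RequiresMountsFor=/mnt/nfs
dependency to the generated mount unit, so /mnt/nas would only be
mounted after /mnt/nfs, which should effectively serialize the two.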
regards,
Steve
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx