Mounting a new device to a mount point with an old (auto-generated) but inactive mount unit triggers an immediate unmount

Hello systemd-devel,


we recently replaced a broken drive on a server and ran into a strange issue regarding a mount.



1) It started with the old device no longer being reachable, so the crypt setup and the mount simply failed (a way to inspect the resulting unit states is sketched after the log excerpt):

--- cut ---
[...]

Jul  2 03:41:11 myserver systemd[1]: dev-disk-by\x2duuid-ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b.device: Job dev-disk-by\x2duuid-ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b.device/start timed out.
Jul  2 03:41:11 myserver systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b.device.
Jul  2 03:41:11 myserver systemd[1]: Dependency failed for Cryptography Setup for ee386599-8235-4d4d-9d3e-901ccf2eed4b_crypt.
Jul  2 03:41:11 myserver systemd[1]: systemd-cryptsetup@ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b_crypt.service: Job systemd-cryptsetup@ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b_crypt.service/start failed with result 'dependency'.
Jul  2 03:41:11 myserver systemd[1]: dev-disk-by\x2duuid-ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b.device: Job dev-disk-by\x2duuid-ee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b.device/start failed with result 'timeout'.
Jul  2 03:41:11 myserver systemd[1]: dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device: Job dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device/start timed out.
Jul  2 03:41:11 myserver systemd[1]: Timed out waiting for device dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device.
Jul  2 03:41:11 myserver systemd[1]: Dependency failed for /var/lib/ceph/osd/ceph-64.
Jul  2 03:41:11 myserver systemd[1]: var-lib-ceph-osd-ceph\x2d64.mount: Job var-lib-ceph-osd-ceph\x2d64.mount/start failed with result 'dependency'.
Jul  2 03:41:11 myserver systemd[1]: dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device: Job dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device/start failed with result 'timeout'.

[...]
--- cut ---
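
For reference, this is roughly how the stale units could have been inspected at that point. A sketch only; the escaped unit names are copied from the journal excerpt above:

--- cut ---
# Show the state of the auto-generated mount unit and of the old
# device unit it depends on (names taken from the journal above).
systemctl status 'var-lib-ceph-osd-ceph\x2d64.mount'
systemctl status 'dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device'

# List all device units systemd currently tracks for the old disk.
systemctl list-units --all 'dev-ceph*'
--- cut ---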


2) We then replaced the drive, created a new LUKS device, and created a new XFS filesystem on top of it, roughly as sketched below.
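
A rough sketch of those steps; /dev/sdX and the mapper name are placeholders, and the LVM layer is an assumption based on the /dev/ceph-$UUID/data device path used further below:

--- cut ---
# /dev/sdX is a placeholder for the new physical drive.
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX ceph-64_crypt   # mapper name is made up

# Assumption: the /dev/ceph-$UUID/data path points to an LVM logical
# volume on top of the crypt device.
vgcreate ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a /dev/mapper/ceph-64_crypt
lvcreate -l 100%FREE -n data ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a

# New XFS filesystem on top.
mkfs.xfs /dev/ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a/data
--- cut ---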


3) We then updated the fstab to point to the new device name (ceph-$UUID of the new LUKS device) and triggered an immediate "mount" via Ansible; the reconstructed task is shown after the fstab entry.
The fstab entry now looks like this:

--- cut ---
/dev/ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a/data /var/lib/ceph/osd/ceph-64 xfs nofail 0 0

--- cut ---
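
The Ansible task, reconstructed from the "Invoked with ..." journal line further below (the task name is made up):

--- cut ---
# Hypothetical task; the parameters match the logged ansible-mount
# invocation shown in the journal excerpt further below.
- name: Mount the new ceph OSD filesystem
  mount:
    src: /dev/ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a/data
    path: /var/lib/ceph/osd/ceph-64
    fstype: xfs
    opts: nofail
    state: mounted
--- cut ---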


The systemd auto-generated mount unit looks like this:

--- cut ---

# /run/systemd/generator/var-lib-ceph-osd-ceph\x2d64.mount
# Automatically generated by systemd-fstab-generator

[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)

[Mount]
Where=/var/lib/ceph/osd/ceph-64
What=/dev/ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a/data
Type=xfs
Options=nofail
--- cut ---


What happened was that the mount succeeded initially, but systemd then unmounted the path right away, apparently because the old auto-generated mount unit was still loaded and bound to the old device unit, which was in "inactive" state (see the sketch after the log):

--- cut ---

Jul  2 13:17:04 myserver ansible-mount: Invoked with src=/dev/ceph-8ce1a4e6-94dd-4fa8-a7b9-310ab211b54a/data path=/var/lib/ceph/osd/ceph-64 fstype=xfs state=mounted opts=nofail boot=True backup=False dump=None fstab=None passno=None
Jul  2 13:17:04 myserver kernel: [367734.361658] XFS (dm-25): Mounting V5 Filesystem
Jul  2 13:17:04 myserver kernel: [367734.373636] XFS (dm-25): Ending clean mount
Jul  2 13:17:04 myserver systemd[1]: var-lib-ceph-osd-ceph\x2d64.mount: Unit is bound to inactive unit dev-ceph\x2dee386599\x2d8235\x2d4d4d\x2d9d3e\x2d901ccf2eed4b-data.device. Stopping, too.
Jul  2 13:17:04 myserver systemd[1]: Unmounting /var/lib/ceph/osd/ceph-64...
Jul  2 13:17:05 myserver systemd[1]: Unmounted /var/lib/ceph/osd/ceph-64.
Jul  2 13:17:05 myserver kernel: [367734.413530] XFS (dm-25): Unmounting Filesystem

--- cut ---
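
The "bound to inactive unit" message presumably comes from the BindsTo= dependency that systemd attaches to the mount unit for its backing device; it can be inspected like this (a sketch, using the unit name from the log):

--- cut ---
# Show which device unit the mount unit is (still) bound to.
systemctl show 'var-lib-ceph-osd-ceph\x2d64.mount' -p BindsTo -p Requires -p After
--- cut ---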



I did find the issue https://github.com/systemd/systemd/issues/1741, which sounds quite similar, but unfortunately it is closed for comments. The suggested "workaround" there apparently is to call "systemctl daemon-reload" prior to mounting (sketched below). Is that the proper way to update the auto-generated mount units?
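
In concrete terms, I assume the workaround would look like this for our case:

--- cut ---
# Re-run the generators so the mount unit picks up the edited fstab,
# then mount, which should no longer trigger the immediate unmount.
systemctl daemon-reload
mount /var/lib/ceph/osd/ceph-64
--- cut ---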

Or did we handle the whole process of replacing a device and mounting a new one at the same mount point incorrectly in some way? As explained, we did not touch any systemd mount units directly; we only modified the fstab.



Thanks and kind regards


Christian




