On Wed, Nov 23, 2022 at 1:27 AM Mike Christie <michael.christie@xxxxxxxxxx> wrote:
>
> On 11/22/22 3:30 PM, Wenchao Hao wrote:
> > There are 3 iSCSI session startup modes: onboot, manual and
> > automatic. We can boot from iSCSI disks with the help of dracut's
> > service in the initrd, which sets the node's startup mode to onboot
> > and then creates the iSCSI sessions.
> >
> > However, the onboot configuration is recorded in a file in the
> > initrd stage and is lost when switching to the rootfs. Even if we
> > update the startup mode to onboot by hand after switching to the
> > rootfs, the configuration could still be overwritten by another
> > discovery command.
> >
> > When booting from iSCSI disks, root is mounted on those disks; if
> > the sessions are logged out, the related disks are removed, which
> > would halt the whole system.
>
> The userspace tools check for this already, don't they? Running iscsiadm
> on the root disk returns a failure and a message about it being in use.
>

It seems we do not check that.

> Userspace can check the session's disks and see if they are mounted and
> what they are being used for.

It's hard to check whether an iSCSI disk is in use. The iSCSI disk may
be used to build a multipath device-mapper device, LVM may be built on
top of that dm device, and root may be mounted on the LVM volumes, like
the following:

sde                                   8:64   0   60G  0 disk
└─360014051a174917ce514486bca53b324 253:4    0   60G  0 mpath
  ├─lvm-root                         253:0    0 38.3G  0 lvm   /
  ├─lvm-swap                         253:1    0  2.1G  0 lvm   [SWAP]
  └─lvm-home                         253:2    0 18.7G  0 lvm   /home

Checking all of these stacked dm devices would couple iscsiadm too
tightly to device mapper.
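
For illustration only, here is a minimal sketch of what such a
userspace check could look like. This is not open-iscsi code; the
helper names (stacked_devices, disk_in_use) are made up, and it
ignores partitions and other corner cases. It recursively walks
/sys/class/block/<dev>/holders (multipath maps and LVM volumes both
show up there as dm-* entries) and compares the result against
/proc/mounts and /proc/swaps:

import os
import sys

def stacked_devices(dev):
    # Yield dev itself and, recursively, every block device stacked
    # on top of it. Multipath maps and LVM LVs both appear as dm-*
    # entries under /sys/class/block/<dev>/holders.
    yield dev
    holders = "/sys/class/block/%s/holders" % dev
    if os.path.isdir(holders):
        for holder in os.listdir(holders):
            yield from stacked_devices(holder)

def in_use_paths():
    # Collect the resolved device paths of everything mounted or used
    # as swap. realpath() turns /dev/mapper/lvm-root into /dev/dm-N.
    paths = set()
    with open("/proc/mounts") as f:
        for line in f:
            paths.add(os.path.realpath(line.split()[0]))
    with open("/proc/swaps") as f:
        for line in f.readlines()[1:]:      # skip the header line
            paths.add(os.path.realpath(line.split()[0]))
    return paths

def disk_in_use(disk):
    # e.g. disk_in_use("sde") with the lsblk layout above walks
    # sde -> dm-4 (mpath) -> dm-0/dm-1/dm-2 and finds dm-0 mounted.
    used = in_use_paths()
    return any(os.path.realpath("/dev/%s" % dev) in used
               for dev in stacked_devices(disk))

if __name__ == "__main__":
    print(disk_in_use(sys.argv[1]))

Even a walk like this only covers mounts and swap; it would still miss
other consumers of the disk, which is part of why the check is hard to
do properly from iscsiadm.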