Re: “root account locked” after removing one RAID1 hard disc

On 30/11/2020 08:44, c.buhtz@xxxxxxxxx wrote:
X-Post: https://serverfault.com/q/1044339/374973

I tried this out in a virtual machine, hoping to learn something.

**Problem**

The RAID1 does not contain any system-relevant data; the OS is on another drive. My Debian 10 no longer boots and tells me that I am in emergency mode: "Cannot open access to console, the root account is locked." Before that, I had removed one of the two RAID1 devices.

I don't think this is specific to RAID ...

And while booting, systemd tells me "A start job is running for /dev/md127".

**Details**

The virtual machine contains three hard disks. /dev/sda1 uses the full size of the disk and contains the Debian 10 installation. /dev/sdb and /dev/sdc (whole disks, without partitions) are configured as RAID1 /dev/md127, formatted with ext4, and mounted on /Daten. I can read from and write to the RAID without any problems.

I did a regular shutdown and then removed /dev/sdc. After that, the system does not boot anymore and shows me the error about the locked root account.

**Question 1**

Why is the system so sensitive about one RAID device that does not contain data essential to the boot process? I would understand if there were an error message somewhere, but blocking the whole boot process is too much in my understanding.

It's not. It's sensitive to the fact that ANY disk is missing.
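
Side note: the locked-root message blocks the emergency console itself, so you first need a shell some other way (set a root password, or boot a rescue system). From there, a rough sketch of confirming what actually failed:

journalctl -xb | less     # boot log; look for the timed-out device or failed mount
systemctl --failed        # list failed units, e.g. Daten.mount for /Daten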

**Question 2**

I read that a single RAID1 device (with the second one missing) can still be accessed without any problems. How can I do that?

When a component of a RAID disappears without warning, the RAID will refuse to assemble properly on the next boot. You need to get to a command line and force-assemble it.
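
Something like this, assuming the surviving member is /dev/sdb (adjust device names to your setup):

mdadm --stop /dev/md127                              # clear the half-assembled, inactive array first
mdadm --assemble --force --run /dev/md127 /dev/sdb   # --force accepts apparently-stale metadata,
                                                     # --run starts the array even though it is degraded
cat /proc/mdstat                                     # should now show md127 active, with [U_] for the
                                                     # missing mirror half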

**More details**

Here is the output of my fdisk -l. Interesting here is that /dev/md127 is shown, but without its filesystem.

Disk /dev/sda: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: VBOX HARDDISK
Disklabel type: dos
Disk identifier: 0xe3add51d

Device     Boot Start       End   Sectors  Size Id Type
/dev/sda1  *     2048 266338303 266336256  127G 83 Linux


Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk model: VBOX HARDDISK

Disk /dev/sdc: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk model: VBOX HARDDISK

Disk /dev/md127: 8 GiB, 8580497408 bytes, 16758784 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
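
Note that fdisk only reads partition tables and never reports filesystems, so nothing is wrong with that output as such. To see the actual array state, ask md itself, e.g.:

cat /proc/mdstat            # arrays with their members and active/inactive/degraded state
mdadm --detail /dev/md127   # array state, e.g. "clean, degraded", or inactive
mdadm --examine /dev/sdb    # the RAID superblock on the surviving member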

Here is the mount output:

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro)
/dev/md127 on /Daten type ext4 (rw,relatime)

And here is at least part of your problem. If the mount fails, systemd will halt and chuck you into a recovery console. I had exactly the same problem with an NTFS partition on a dual-boot system.

This is /etc/fstab:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=65ec95df-f83f-454e-b7bd-7008d8055d23 /               ext4 errors=remount-ro 0       1

/dev/md127  /Daten      ext4    defaults    0   0
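
A suggested change (not what the fstab above contains): if you want the box to boot even when /Daten cannot be mounted, mark the entry as non-critical with nofail, and optionally cap how long systemd waits for the device:

/dev/md127  /Daten      ext4    defaults,nofail,x-systemd.device-timeout=10s    0   0

With that, a failed mount is logged instead of dropping you into emergency mode.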


Is root's home on /Daten? It shouldn't be.

Cheers,
Wol


