Healthy RAID10 always mounts under Ubuntu 16.04, but not under Ubuntu 18.04.

Greetings,

Under Linux kernel 4.4.0-127-generic (Ubuntu 16.04 LTS, Xenial Xerus), I
successfully created a healthy RAID10 from 4x4 TiB drives, with LVM2 and
ext4 on top.

When I boot from a USB keyfob with Ubuntu 18.04 LTS Bionic Beaver
installed (Linux kernel 4.15.0-20-generic #21-Ubuntu SMP), the system
cannot find the LVM2 physical volume.

Under Ubuntu 18.04 (USB keyfob), I run:

$ apt install mdadm
...
$ mdadm --verbose --assemble /dev/md0  # Works!
$ cat /proc/mdstat                     # Shows raid.
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6]
[raid5] [raid4]
md0 : active raid10 sdh[0] sdb[1] sdc[2] sdd[3]
      7813774336 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

$ pvscan  # Shows nothing!
$

Why can't pvscan find the physical volume?  Remember, this is a 100%
healthy, working RAID10.  If I remove the USB keyfob and reboot into my
Ubuntu 16.04 LTS system, the whole drive (ext4) gets mounted
automatically, every time, without any errors.  It works flawlessly
under Linux kernel 4.4.0-127-generic, but not under Linux kernel
4.15.0-20-generic #21.

On the web, I only find reports of pvscan returning nothing when the
RAID itself is defective.  Is there a way to force pvscan/LVM2 to
consider /dev/md0?
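
For reference, here is the kind of change I have been considering in
/etc/lvm/lvm.conf on the 18.04 system.  This is only a sketch; I am
guessing that the default device filter may be rejecting the md device,
which I have not confirmed:

    # /etc/lvm/lvm.conf excerpt -- assumption: the default filter is
    # hiding /dev/md0.  Explicitly accept it, then reject everything
    # else so LVM does not also see the PV through the member disks:
    devices {
        global_filter = [ "a|^/dev/md0$|", "r|.*|" ]
    }

After editing I would run "pvscan --cache" (or reboot) so LVM rescans
the devices.  I would also expect "blkid /dev/md0" to report
TYPE="LVM2_member" if the PV label is visible on the array at all.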

Unfortunately, I cannot upgrade to Ubuntu 18.04/Linux Kernel 4.15.0
until this is resolved.

Best regards,
Hans Deragon
