On Wed, 20 Mar 2024 23:15:34 +0800
d tbsky <tbskyd@xxxxxxxxx> wrote:

> Hi:
>    Today I wanted to install RHEL 9.3 with an mdadm software RAID1
> "/boot" partition on a server. The installation failed with the
> message "failed to write boot loader configuration".
>
> I switched to the console, and "dmesg" showed a lot of errors about
> "rtc write failed with error -22". I checked the system time and
> found someone had set the server to year "2223". I corrected the time
> to year "2024" and reinstalled RHEL 9.3 with the same disk layout
> (i.e. I didn't recreate the mdadm RAID, since that would need extra
> steps). Again the installation failed with the same error message.
>
> I was curious, so I checked what had happened. I found that the
> md-uuid string under "/dev/disk/by-id" is byte-reversed compared to
> what mdadm itself reports. Below are some of the strange results.
> Maybe the issue is not important, and people in the far future will
> fix it someday if we don't kill the bug now. Just sharing the
> experience.
>
> >ls -la /dev/disk/by-id | grep md-uuid
> lrwxrwxrwx 1 root root 11 Mar 20 03:10 md-uuid-a4e266d2:68ae1848:1a6d6a71:a419ebdb -> ../../md127
>
> >mdadm --examine --scan
> ARRAY /dev/md/boot metadata=1.2 UUID=d266e2a4:4818ae68:716a6d1a:dbeb19a4 name=localhost.localdomain:boot
>
> >mdadm -E /dev/sda2    (output shows a creation time in year 2223)
> /dev/sda2:
>            Magic : a92b4efc
>          Version : 1.2
>      Feature Map : 0x1
>       Array UUID : d266e2a4:4818ae68:716a6d1a:dbeb19a4
>             Name : localhost.localdomain:boot  (local to host localhost.localdomain)
>    Creation Time : Fri Nov 14 07:32:22 2223
>       Raid Level : raid1
>     Raid Devices : 5
>
>   Avail Dev Size : 1048576 sectors (512.00 MiB 536.87 MB)
>       Array Size : 524288 KiB (512.00 MiB 536.87 MB)
>      Data Offset : 2048 sectors
>     Super Offset : 8 sectors
>     Unused Space : before=1968 sectors, after=0 sectors
>            State : clean
>      Device UUID : 4007990e:44762c79:efab3543:04a55382
>
>  Internal Bitmap : 8 sectors from superblock
>      Update Time : Wed Mar 20 03:07:49 2024
>    Bad Block Log : 512 entries available at offset 16 sectors
>         Checksum : 87a9793f - correct
>           Events : 38
>
>      Device Role : Active device 3
>      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
>
> >mdadm --detail /dev/md127    (output shows a creation time in year
> 2106, which is not correct)
> /dev/md127:
>            Version : 1.2
>      Creation Time : Sun Feb  7 06:28:15 2106
>         Raid Level : raid1
>         Array Size : 524288 (512.00 MiB 536.87 MB)
>      Used Dev Size : 524288 (512.00 MiB 536.87 MB)
>       Raid Devices : 5
>      Total Devices : 5
>        Persistence : Superblock is persistent
>
>      Intent Bitmap : Internal
>
>        Update Time : Wed Mar 20 03:07:49 2024
>              State : clean
>     Active Devices : 5
>    Working Devices : 5
>     Failed Devices : 0
>      Spare Devices : 0
>
> Consistency Policy : bitmap
>
>     Number   Major   Minor   RaidDevice   State
>        0       8       50        0        active sync   /dev/sdd2
>        1       8       18        1        active sync   /dev/sdb2
>        2       8       34        2        active sync   /dev/sdc2
>        3       8        2        3        active sync   /dev/sda2
>        4       8       66        4        active sync   /dev/sde2

Hi,

There could be a regression upstream in mdadm --detail --export. See the
proposed fix:
https://patchwork.kernel.org/project/linux-raid/patch/20240318151930.8218-3-mariusz.tkaczyk@xxxxxxxxxxxxxxx/

There are no comments, so I will merge the fix soon.

Xiao, could you please check RHEL 9.3 and, if necessary, revert the
patch in z-stream?

Thanks,
Mariusz
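
For reference, the two UUID spellings quoted above differ only by byte
order within each 32-bit word: a4e266d2 is d266e2a4 with its four bytes
reversed, and likewise for the other three groups. Below is a minimal C
sketch (my own throwaway code, not anything from mdadm or udev) that
converts the spelling printed by mdadm into the spelling used in the
/dev/disk/by-id symlink:

/* swap_md_uuid.c - show that the two UUID spellings in this thread
 * differ only by byte order within each 32-bit word.
 * Build with: cc -o swap_md_uuid swap_md_uuid.c
 */
#include <stdio.h>
#include <string.h>

/* Reverse the hex-digit pairs (bytes) inside one 8-digit word. */
static void swap_word(const char *in, char *out)
{
    int i;

    for (i = 0; i < 4; i++) {
        out[2 * i]     = in[6 - 2 * i];
        out[2 * i + 1] = in[7 - 2 * i];
    }
    out[8] = '\0';
}

int main(void)
{
    /* Array UUID as printed by mdadm --examine above. */
    const char *mdadm_uuid = "d266e2a4:4818ae68:716a6d1a:dbeb19a4";
    char word[9], tmp[9], swapped[40] = "";
    int w;

    for (w = 0; w < 4; w++) {
        memcpy(word, mdadm_uuid + w * 9, 8);
        word[8] = '\0';
        swap_word(word, tmp);
        strcat(swapped, tmp);
        if (w < 3)
            strcat(swapped, ":");
    }

    /* Prints a4e266d2:68ae1848:1a6d6a71:a419ebdb, which matches the
     * md-uuid symlink under /dev/disk/by-id above. */
    printf("%s\n", swapped);
    return 0;
}

So the two spellings encode the same 16 bytes; they just differ in
whether each 32-bit word of the superblock UUID is printed byte-by-byte
or with its byte order swapped, which is why a consumer comparing the
two strings literally will never see them match.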
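
As for the year 2106: "Sun Feb  7 06:28:15 2106" is exactly what
UINT32_MAX (0xffffffff) seconds after the Unix epoch decodes to, so the
year-2223 creation time presumably gets clamped to a 32-bit seconds
field somewhere on the --detail path, while --examine reads the wider
creation-time field straight out of the v1.2 superblock and can still
represent 2223. A short C sketch (assuming a 64-bit time_t) that
reproduces the printed date:

/* year2106.c - decode the suspicious creation time printed by
 * "mdadm --detail" above. Build with: cc -o year2106 year2106.c
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Largest value a 32-bit unsigned seconds counter can hold;
     * needs a 64-bit time_t to stay positive. */
    time_t clamped = (time_t)UINT32_MAX;
    struct tm tm;
    char buf[64];

    gmtime_r(&clamped, &tm);
    strftime(buf, sizeof(buf), "%a %b %e %H:%M:%S %Y", &tm);

    /* Prints "Sun Feb  7 06:28:15 2106" (UTC), i.e. the exact
     * Creation Time shown by mdadm --detail in the report. */
    printf("%s\n", buf);
    return 0;
}

That the printed date is exactly the 32-bit maximum, rather than the
2223 timestamp wrapped modulo 2^32 (which would land in the 2080s),
suggests the value is being clamped rather than truncated on its way
to --detail.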