I've never done full-disk RAID1; I've always done it with
partitions.
fdisk -l /dev/sda (and /dev/sdb) looks like this:
Disklabel type: gpt
Device          Start        End    Sectors  Size Type
/dev/sda1        2048   97722367   97720320 46.6G Linux RAID
/dev/sda2    97722368   99809279    2086912 1019M Linux RAID
/dev/sda3    99809280  101040127    1230848  601M Linux RAID
/dev/sda4   101040128 3907028991 3805988864  1.8T Linux RAID
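If the second disk isn't partitioned yet, one way to clone the layout is
sgdisk from the gdisk package (a sketch, assuming both disks are the same
size; note that with -R the destination disk comes first):
# copy sda's partition table onto sdb, then randomize the GUIDs so they don't collide
$ sgdisk -R /dev/sdb /dev/sda
$ sgdisk -G /dev/sdb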
mdadm:
# -l = level
# -n = raid-devices
# -e = metadata
# -b = bitmap
mdadm -C /dev/md127 --homehost=myserver.example.com -n 2 -l 1 -e 1.2 -b internal /dev/sda2 /dev/sdb2
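One thing to watch: if these partitions ever held an old array, mdadm may
complain about existing metadata at create time. Wiping it first (before
the -C above) is the usual fix; this destroys the old RAID superblocks, so
double-check the device names:
$ mdadm --zero-superblock /dev/sda2 /dev/sdb2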
cat /proc/mdstat:
Personalities : [raid1]
md127 : active raid1 sdb2[1] sda2[0]
1042432 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
mdadm --detail /dev/md127:
UUID : e00525a0:a3a5bfc8:ebe4587b:8489d910
mdadm.conf (use the UUID from mdadm --detail):
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=e00525a0:a3a5bfc8:ebe4587b:8489d910
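Rather than typing that line by hand, mdadm can generate it for you; on
Fedora you'll also want to rebuild the initramfs so the array assembles
early at boot:
$ mdadm --detail --scan >> /etc/mdadm.conf
$ dracut -f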
You don't need to wait for the initial sync to complete before formatting.
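If you want to keep an eye on the resync anyway, either of these works:
$ watch -n 5 cat /proc/mdstat    # live progress
$ mdadm --wait /dev/md127        # block until the sync finishes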
Format the partition:
$ mkfs.xfs -L BOOT /dev/md127
# get uuid
$ xfs_admin -u /dev/md127
UUID = 9385d42a-d661-494c-ba1d-b3cad4420b77
# get label
$ xfs_admin -l /dev/md127
label = "BOOT"
fstab (use the UUID from xfs_admin -u):
# device                                   mount  type  options   dump  fsck
#                                          point                  pgm   order
#LABEL=BOOT /dev/md127
UUID=9385d42a-d661-494c-ba1d-b3cad4420b77  /boot  xfs   defaults  0     0
Instead of the UUID, you can use /dev/md127 (or LABEL=BOOT, as in the commented-out line).
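To check the new entry without rebooting, mount by its fstab mount point
(on systemd-based Fedora a daemon-reload keeps the generated mount units
in sync with fstab):
$ systemctl daemon-reload
$ mount /boot
$ df -h /boot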
df:
Filesystem      Size  Used Avail Use% Mounted on
/dev/md125      1.8T  279G  1.5T  16% /
/dev/md127     1013M  317M  696M  32% /boot
/dev/md21       931G   40G  892G   5% /ssd
/dev/md124      600M  8.5M  592M   2% /boot/efi
/dev/md32       9.1T  1.5T  7.6T  17% /lan
/dev/md42       9.1T  399G  8.7T   5% /bacula
Hope this helps,
Bill
Still getting the hang of md. I had it working for several days (2 disks
in RAID1 config) but after a system update and reboot, it suddenly shows
no data:

# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
[...]
sdd           8:48   0 931.5G  0 disk
└─md127       9:127  0 931.4G  0 raid1
  └─md127p1 259:0    0 931.4G  0 part
sde           8:64   0 931.5G  0 disk
└─md127       9:127  0 931.4G  0 raid1
  └─md127p1 259:0    0 931.4G  0 part

# mdadm --detail /dev/md127p1
/dev/md127p1:
           Version : 1.2
     Creation Time : Wed May 20 16:34:58 2020
        Raid Level : raid1
        Array Size : 976628736 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun May 24 12:29:54 2020
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : Bree:0  (local to host Bree)
              UUID : ba979f01:7f1dbe79:24f19f68:7ba6000c
            Events : 22436

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde

# mount /dev/md127p1 /raid
# ls /raid

How is this possible? The only thing that touches the array is a borg
backup run from crontab, which I have verified is working correctly,
including just before the update and reboot this morning. It looks as if
the mount is mounting the wrong thing. Or am I missing something very
obvious?

poc