I've replaced a working RAID-10 array's 4x3TB drives with new 4x4TB drives.
I'm running:
uname -rm
6.12.10-200.fc41.x86_64 x86_64
mdadm --version
mdadm - v4.3 - 2024-02-15
The array has been rebuilt -- replacing one drive at a time -- and is up:
cat /proc/mdstat
Personalities : [raid1] [raid10]
md2 : active raid10 sdl1[4] sdk1[7] sdn1[5] sdm1[6]
5860268032 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]
bitmap: 0/44 pages [0KB], 65536KB chunk
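For reference, each swap followed roughly the usual fail / remove / re-partition / add / resync cycle -- something like this per disk (sdX is a placeholder, not the exact commands I typed):
mdadm /dev/md2 --fail /dev/sdX1 --remove /dev/sdX1
# physically swap the 3TB disk for the 4TB one, create a GPT partition of type 'Linux RAID' on it
mdadm /dev/md2 --add /dev/sdX1
# wait for the resync to finish (watch /proc/mdstat) before moving on to the next disk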
mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Tue Oct 31 22:29:24 2017
Raid Level : raid10
Array Size : 5860268032 (5.46 TiB 6.00 TB)
Used Dev Size : 2930134016 (2.73 TiB 3.00 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Jan 29 10:43:38 2025
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : far=2
Chunk Size : 512K
Consistency Policy : bitmap
Name : nas:2
UUID : c...
Events : 14122
Number Major Minor RaidDevice State
4 8 177 0 active sync /dev/sdl1
6 8 193 1 active sync /dev/sdm1
5 8 209 2 active sync /dev/sdn1
7 8 161 3 active sync /dev/sdk1
I'm now attempting to resize the array to use the new, full capacity. The current LVM layout is:
pvs
PV VG Fmt Attr PSize PFree
/dev/md2 VG_N1 lvm2 a-- <5.46t 0
/dev/md3 VG_N1 lvm2 a-- <5.46t 0
vgs
VG #PV #LV #SN Attr VSize VFree
VG_N1 2 1 0 wz--n- <10.92t 0
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LV_N1 VG_N1 -wi-a----- <10.92t
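The end goal, once the md device and the PV actually pick up the extra space, is to extend LV_N1 and the filesystem on it -- roughly the following (resize2fs is just an example; the real grow command depends on what filesystem is on LV_N1):
lvextend -l +100%FREE /dev/VG_N1/LV_N1
resize2fs /dev/VG_N1/LV_N1   # or xfs_growfs on the mountpoint if it's XFS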
After resizing the drive partitions, the member disks now show:
fdisk -l /dev/sdk
Disk /dev/sdk: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFPX-68C
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 16773120 bytes
Disklabel type: gpt
Disk identifier: 3...
Device Start End Sectors Size Type
/dev/sdk1 2048 7814037134 7814035087 3.6T Linux RAID
fdisk -l /dev/sd[lmn] | grep "Linux RAID"
/dev/sdl1 2048 7814037134 7814035087 3.6T Linux RAID
/dev/sdm1 2048 7814037134 7814035087 3.6T Linux RAID
/dev/sdn1 2048 7814037134 7814035087 3.6T Linux RAID
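For reference, each partition was grown in place with something along these lines (parted shown as an illustration; repeated for sdl, sdm, sdn):
parted /dev/sdk resizepart 1 100%
partprobe /dev/sdk   # re-read the partition table if the kernel didn't pick up the change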
Then I ran:
pvresize -v /dev/md2
Resizing volume "/dev/md2" to 11720536064 sectors.
No change to size of physical volume /dev/md2.
Updating physical volume "/dev/md2"
Archiving volume group "VG_N1" metadata (seqno 12).
Physical volume "/dev/md2" changed
Creating volume group backup "/etc/lvm/backup/VG_N1" (seqno 13).
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
but `pvs` STILL returns:
pvs
PV VG Fmt Attr PSize PFree
/dev/md2 VG_N1 lvm2 a-- <5.46t 0 <--------- NOT expanded
/dev/md3 VG_N1 lvm2 a-- <5.46t 0
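In case device-level sizes help with diagnosis, I can also post the output of e.g.:
blockdev --getsize64 /dev/md2
mdadm --detail /dev/md2 | grep -E 'Array Size|Used Dev Size'
cat /sys/block/md2/md/component_size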
An attempt to `--grow` the array doesn't succeed either:
umount /dev/VG_N1/LV_N1
mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1
lvchange -an /dev/VG_N1/LV_N1
vgchange -an /dev/VG_N1
mdadm --grow /dev/md2 --raid-devices=4 --size=max --force
mdadm: cannot change component size at the same time as other changes.
Change size first, then check data is intact before making other changes.
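If the per-member superblock view matters, I can also add, for each member, the output of:
mdadm --examine /dev/sdk1 | grep -E 'Avail Dev Size|Used Dev Size'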
What's incorrect/missing in that procedure? How do I get the array to use the full partitions?