RAID5 Shrinking array-size nearly killed the system

Using Ubuntu 10.10, mdadm v3.2, ext4 filesystem. I wanted to shrink
from 6 disks to 4. I have about 2TB of files on the array, so I ran
$ sudo mdadm -G -n 4 /dev/md0
which gave the message:

mdadm: this change will reduce the size of the array.
       use --grow --array-size first to truncate array.
       e.g. mdadm --grow /dev/md0 --array-size 5857612608

then ran
$ sudo mdadm --grow /dev/md0 --array-size 5857612608
and started testing the filesystem prior to reducing the array. I
quickly found out that the filesystem was broken, badly enough that I
could no longer run commands such as sudo, mdadm, reboot, and ls. I
had to power down the box. On restart there were a number of disk
errors, but the system did come back up and it looks like there
wasn't much damage. My fstab is listed after my questions.
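
For context, my understanding (possibly wrong, hence the questions
below) is that the whole operation should have looked roughly like
the sketch below, with the filesystem shrunk before the array is
truncated. Since / itself is on md0, I assume this has to be done
from a rescue or live environment; the resize2fs target and the
backup-file path are just example values I have not verified:

$ sudo fsck -f /dev/md0                # fs must be unmounted for this and the resize
$ sudo resize2fs /dev/md0 5400G        # shrink the fs well below the 5857612608 KiB target
$ sudo mdadm --grow /dev/md0 --array-size 5857612608
$ sudo fsck -f /dev/md0                # confirm the fs still fits and is clean
$ sudo mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-backup
$ sudo resize2fs /dev/md0              # once the reshape finishes, grow the fs to fill the array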
Questions:
1. Is there a safer way to shrink the file system prior to reducing
the number of disks in the array?
2. Is there a way of rearranging the files and directories to make
shrinking the file system safer?
3. Is there something I did that caused the crash?
Thanks-Rory


# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/md0 during installation
UUID=9a978b70-e034-4d79-9e1d-237a67b553d5 /               ext4    commit=60,errors=remount-ro,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1,barrier=0 0       1
# /boot was on /dev/sdb1 during installation
UUID=5beb5144-6d2f-4a73-b9e9-442355d8f529 /boot           ext2    defaults        0       2
# swap was on /dev/sda1 during installation
UUID=31660a21-3f99-4ffb-81cc-501dc6ce5de7 none            swap    sw              0       0
# swap was on /dev/sdc1 during installation
UUID=f168588d-8b9e-45c3-b9ae-f90f66906616 none            swap    sw              0       0
# swap was on /dev/sdd1 during installation
UUID=05cadc48-59df-479e-b5ed-b9e9322cb905 none            swap    sw              0       0
# swap was on /dev/sde1 during installation
UUID=61fba94d-e6c5-4a58-b0cd-9d878b55b65c none            swap    sw              0       0
# swap was on /dev/sdf1 during installation
UUID=47737641-7555-4cbc-9bf6-508c9f2035bc none            swap    sw              0       0
# swap was on /dev/sdg1 during installation
UUID=ad06f3d6-a6ec-445a-bcfb-427fec72725b none            swap    sw              0       0

