Re: raidreconf for 5 x 320GB -> 8 x 320GB

Unless I'm misunderstanding your goal, you don't want to use raidreconf.
I have had success with it too, but I have also had data-loss failures
(I've posted to the list about them if you search the archives). Those
failures came after I had run tests that suggested it should work.

raidreconf is no longer maintained, so trying to hunt down those
failures is a dead end.

Luckily, Neil Brown has added raid5 "reshape" support (same thing
raidreconf did) to the md driver, so you can just use 'mdadm --grow'
commands to do what you want.
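
Roughly, going from 5 to 8 disks would look something like this. The
device names are just placeholders for your three new drives, and I'd
double-check the mdadm man page for your version first:

  # add the three new disks as spares
  mdadm /dev/md0 --add /dev/sdf1 /dev/sdg1 /dev/sdh1

  # reshape from 5 to 8 devices; this is the long part. I believe
  # --backup-file protects the critical section at the very start,
  # but check your mdadm man page.
  mdadm --grow /dev/md0 --raid-devices=8 --backup-file=/root/md0-grow.bak

  # watch it run
  cat /proc/mdstat

  # once the reshape finishes, grow the filesystem, same as in your test
  resize2fs /dev/md0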

So I'd say update your kernel to the newest 2.6.18.x or whatever is
out, update mdadm, and give that a shot with your test partitions. The
new versions are working fine as near as I can tell, and I've got them
in use (FC5 machines - you can see their versions, and call me foolish
for putting FC in production if you want) in a production environment
with no issues.
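
If you want to rehearse it first the way you did with raidreconf, you
could run the whole thing on a throwaway array built from loop devices
instead of real partitions; something along these lines (the file
names, /dev/md1 and the mount point are just examples):

  # five small files as fake disks
  for i in 0 1 2 3 4; do
      dd if=/dev/zero of=/tmp/disk$i bs=1M count=100
      losetup /dev/loop$i /tmp/disk$i
  done

  # 4-disk test array, filesystem, and some data to checksum
  mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
  mkfs.ext3 /dev/md1
  mkdir -p /mnt/test
  mount /dev/md1 /mnt/test
  # copy a few files, note their md5sums, then unmount
  umount /mnt/test

  # grow to 5 disks, then resize and re-check the checksums
  mdadm /dev/md1 --add /dev/loop4
  mdadm --grow /dev/md1 --raid-devices=5
  # when the reshape is done:
  e2fsck -f /dev/md1
  resize2fs /dev/md1
  mount /dev/md1 /mnt/test

That is essentially your original test, just with mdadm doing the
reshape instead of raidreconf, so you can compare checksums the same
way before touching the real array.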

-Mike

Timo Bernack wrote:
> Hi there,
> 
> I am running a 5-disk RAID5 using mdadm on a SUSE 10.1 system. As the
> array is running out of space, I am considering adding three more HDDs.
> Before I set up the current array, I did a small test with raidreconf:
> 
> - build a 4-disk RAID5 /dev/md0 (with only 1.5GB per partition),
>   4.5GB of usable space in total
> - put an ext3 filesystem on it
> - copy some data to it -- some episodes of "American Dad" ;-)
> - use raidreconf to add a 5th disk
> - use resize2fs to make use of the additional space
> - check the video clips... all fine (also compared checksums)
> 
> This test was a full success, but of course it was on a very small
> scale, so maybe there are issues that only come up when there is
> (much) more space involved. That leads to my questions:
> 
> What are potential sources of failure (and thus of losing all data)
> when reconfiguring the array using the method described above? Loss of
> power during the process (which would take quite some time, 24 hours
> minimum, I think) is one of them, I suppose. But are there known
> issues with raidreconf concerning the 2TB barrier, for example?
> 
> I know that raidreconf is quite outdated, but it did what it promised
> on my system. I have heard it is possible to achieve the same result
> just by using mdadm, but this requires a newer version of mdadm, and
> upgrading it and using a method that I can't test beforehand scares me
> a little -- a little more than letting raidreconf loose on my precious
> data does ;-).
> 
> All comments will be greatly appreciated!
> 
> 
> Timo
> 
> P.S.:
> I do have a backup, but since it is scattered across a huge stack of
> CDs / DVDs (about 660 discs) it would be a terrible pain-in-the-ass to
> be forced to restore it again. In fact, getting away from storing my
> data with a DVD burner was the main reason to build the array in the
> first place. It took me about one week (!) to copy all those discs, as
> you can easily imagine.
> 
> -----
> Hardware:
> - Board / CPU: ASUS M2NPV-VM (4 x S-ATA onboard) / AMD Sempron 3200+ AM2
> - Add. S-ATA-Controller: Promise SATA300 TX4
> - HDDs: 5 x Western Digital Caviar SE 320GB SATA II (WD3200JS)
> 
> Software (OpenSUSE 10.1 Default-Installation):
> - Kernel: 2.6.16
> - mdadm - v2.2 - 5 December 2005