RE: raidreconf

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Jakob Østergaard
> Sent: Thursday, February 07, 2002 6:14 PM
> To: Fredrik Lindgren
> Cc: linux-raid@vger.kernel.org
> Subject: Re: raidreconf
> 
> On Thu, Feb 07, 2002 at 01:12:04PM +0100, Fredrik Lindgren wrote:
> > > > So, if it took 3 hours to add 50 gb to a 60 gb array, a rough
> > > > calculation says it will take about 19 hours to add 100 gb
> > > > to a 380 gb array, which is close to your calculations, yet
> > > > still lower.
> > >
> > > My calculations were a very rough estimate - perhaps you will
> > > see a performance degradation when moving to larger
> > > partitions (because average seek time will be slightly
> > > higher), but I'm not sure if that is going to be noticeable at all.
> >
> > I did run raidreconf on a 560Gb array (8x80Gb RAID5), adding another
> > disk to take it to 640Gb. The machine was an Athlon 1200MHz with 512Mb
> > RAM, all disks on different IDE channels, Samsung 5400rpm disks.
> > Estimated runtime was 29 hours, it had been going for about 21h when
> > it crashed because of bad sectors on the new disk (duh!). At that
> > point it was 70%+ complete so the estimate was pretty accurate
> > in this case.
> 
> Thanks a lot for the feedback !
> 
> Tough luck with the data though  ;)

In the past, I had a few incidents where the kernel complained about bad
sectors on one of the disks in the array. However, I didn't do anything
about it, for several reasons: the filesystem on top of md0 is reiserfs,
and there is no tool for reiserfs to efficiently check and mark bad
blocks; a read+write test with badblocks on such a large filesystem is
impractical in itself (it takes hours for a few gigabytes, so it would
take many days for a filesystem this size); and it was only a relatively
small number of blocks. If raidreconf comes across these sectors,
reading from them may or may not fail, but writing to them will most
likely fail. Would raidreconf crash under such circumstances, or would
it fail on those blocks and continue with the conversion?
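For what it's worth, the runtime figures quoted earlier in the thread are consistent with reconfiguration time scaling roughly linearly with the size of the existing array (the data that has to be re-laid-out). A small sketch of that back-of-the-envelope calculation, using the 3 h / 60 GB run from the thread as the baseline (the linear-scaling assumption is mine, inferred from the numbers above):

```python
# Rough raidreconf runtime estimate, assuming time scales linearly
# with the size of the existing array being re-laid-out. The
# 3 h / 60 GB baseline comes from the figures quoted in this thread.

def estimate_hours(existing_gb, baseline_existing_gb=60, baseline_hours=3):
    """Scale a known run (3 h for a 60 GB array) to another array size."""
    return baseline_hours * existing_gb / baseline_existing_gb

# 380 GB array: about 19 hours, matching the estimate quoted above.
print(estimate_hours(380))  # 19.0

# Fredrik's 560 GB array: about 28 hours, close to his 29 h estimate.
print(estimate_hours(560))  # 28.0
```

The agreement with the observed 29-hour estimate on the 560 GB array suggests seek-time effects on larger partitions are minor compared with the raw amount of data moved.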
