RE: raidreconf

Hello again.
I am just writing to report the results of some tests I ran today, since I
noticed you wrote that feedback from our experience with raidreconf is
welcome.
Even though this is getting boring, I must note again that your answers
have been very helpful and enlightening :)

> -----Original Message-----
> From: Jakob Østergaard [mailto:jakob@unthought.net]
> Sent: Thursday, February 07, 2002 12:01 AM
> To: Cajoline
> Cc: linux-raid@vger.kernel.org
> Subject: Re: raidreconf
> 
> On Wed, Feb 06, 2002 at 02:11:08AM +0200, Cajoline wrote:
> ...
> > So this means I have to maintain the device sequence in the new
> > arrangement, right? I hope this doesn't sound too naive, but how will
> > the raid layer know the new location of each partition?
> 
> No - All superblocks on the partitions participating in some specific
> array will have a "unique ID".   This will identify all superblocks
> belonging to the same array, among the possible myriad of partitions
> with superblocks you may have on your system (with many arrays).
> 
> Each superblock also contains information like "I'm disk 0 in this
> array", or "I'm disk 3 in this array"  -  whether the kernel thinks
> the partition name is /dev/hda5 or /dev/hde5 is of no importance to
> the RAID layer.
> 
> Shuffle the disks around as you please (as long as your arrays use
> persistent superblocks!)

Thanks, that clears up everything. I tried moving some devices around to
different slots, and so far it has worked only some of the time; when it
failed, the raid driver couldn't find the partition on the moved disk.
Moving the drives onto the same channel (from one drive per channel per
controller) worked, as did switching them around between channels and
between master and slave on the same channel. Strangely enough, though,
moving a drive from the onboard IDE controller to one of the Promise
controllers failed: the kernel would find the drive and the partition on
it, but the raid driver wouldn't. I thought this had to do with the
sequence of the drives or something close to that, but now I understand it
is probably a problem related to the controllers and their chipsets.
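
For anyone who wants to check this kind of thing, something like the
following rough Python sketch should dump the array UUID and the "I am
disk N" role straight off a partition, so you can at least see whether
the superblock on the moved disk is there and intact. The offsets are
just my reading of the 0.90 format in the kernel's
include/linux/raid/md_p.h, so treat them as illustrative rather than
authoritative:

  # peek_md_sb.py - rough sketch; offsets from my reading of the 0.90
  # superblock layout in include/linux/raid/md_p.h, double-check them
  # against your kernel headers before trusting the output.
  import struct, sys

  MD_SB_MAGIC = 0xa92b4efc     # magic word of a persistent md superblock
  RESERVED = 64 * 1024         # sb sits in the last 64 KiB-aligned chunk

  def dump(path):
      with open(path, 'rb') as dev:
          dev.seek(0, 2)
          size = dev.tell()
          # 0.90 rule: round the size down to 64 KiB, then step back 64 KiB
          dev.seek((size & ~(RESERVED - 1)) - RESERVED)
          raw = dev.read(4096)
      w = struct.unpack('=16I', raw[:64])   # first 16 words, host byte order
      if w[0] != MD_SB_MAGIC:
          print('%s: no 0.90 superblock found' % path)
          return
      uuid = (w[5], w[13], w[14], w[15])    # set_uuid0..set_uuid3
      level, raid_disks = w[7], w[10]
      # the this_disk descriptor starts at word 992; its 4th word is raid_disk
      number, major, minor, raid_disk = struct.unpack('=4I', raw[3968:3984])
      print('%s: uuid=%08x:%08x:%08x:%08x level=%d disk %d of %d'
            % (path, uuid[0], uuid[1], uuid[2], uuid[3],
               level, raid_disk, raid_disks))

  for dev in sys.argv[1:]:
      dump(dev)    # e.g. python peek_md_sb.py /dev/hda5 /dev/hde5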
 
> > I decided to do a test tonight with a few drives I could repartition
> > just for this. I used two drives, 80gb and 100gb, made two equal-size
> > partitions on each, and made an array with the 2 40gb and the 1 50gb
> > partition. I copied something to it, filled about 1.5gb, and then
> > tried raidreconf to add the second 50gb partition to the array. The
> > box had 224mb RAM and a Duron 850 MHz processor.
> 
> Ok, that's a decent portion of memory.
> 
> > It gave an estimate of about 12 hours to complete the process. I let
> > it continue for about 2 hours before I aborted, and it hadn't reached
> > 20% yet. I am not sure how much memory it used, unfortunately I just
> > didn't look.
> 
> raidreconf will use the majority of memory for it's own "gift buffer",
> but leave some for the kernel to use (for buffers/caches) as it sees
> fit.
> 
> One thing is certain: all memory is going to be used for *some* purpose
> during a raidreconf run.
> 
> >
> > Is this estimation accurate or close to accurate? So could I assume
> > that adding a 100gb disk to a 380gb array would take a multiple of
> > that time to complete, perhaps more than 48 hours?
> 
> The estimate needs a little time to "stabilize", but it should be a
> fair estimate.  The speed can change somewhat during the run, depending
> on array configurations, but not by orders of magnitude.
> 
> So you would actually be reading 380 G of data and writing the same
> amount back again.  If we assume that your disks on average will do
> 10 MB/sec transfers (because some seeking will be involved, this *may*
> not be an unreasonable estimate), then the total transfer would take
> around
>    2[read+write] * 380[GB] * 1024[MB/GB] / 10[MB/s] / 3600[s/hour]
>      = 21.62 [hours]
> 
> Perhaps raidreconf could do better than 48 hours.  I would guess that
> most of the "missing 26 hours" are disk seek-time.  There are a few
> sub-optimal algorithms in raidreconf as well - I don't know if they
> even show up on modern CPUs though.

I did another raidreconf test today; this time I used an array of two
30 GB partitions and tried to add another 50 GB partition to it.
Raidreconf estimated it would take somewhat over 7 hours to complete. I
let it run to the end and monitored the process through top/ps most of
the time. I'm not sure how accurate this is, but it never reported using
more than 6% of CPU time and about 7% of memory (i.e. about 15 MB) at any
point while it was running. In the end it finished in 3h7m4s, which is
quite a bit less than the estimate it was giving most of the time.
So, if it took about 3 hours to add 50 GB to a 60 GB array, a rough
linear scaling from the existing array size says it will take about 19
hours to add 100 GB to a 380 GB array, which is close to your
calculation, yet still lower (see the sketch below).
I should also note that in this test all 3 partitions were on the same
disk, which should, if anything, slow the process down rather than speed
it up. I am really beginning to think (and hope) I can live with that, if
it only takes about 20 hours to do it :) that is, until you grow up and
write that kernel patch that will allow raidreconf to work online, on an
active md0 :)
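
To make the arithmetic behind that 19-hour figure explicit, here is the
back-of-the-envelope comparison I have in mind (the 10 MB/s figure is the
assumption from your mail, the 3h7m measurement is from today's run, and
none of this is more than a rough model):

  measured_hours = 3 + 7 / 60.0          # today's run: 50 GB added to a 60 GB array
  small_array, big_array = 60.0, 380.0   # existing array sizes in GB

  # naive linear scaling from the size of the existing array
  print('scaled estimate : %.1f hours' % (measured_hours * big_array / small_array))

  # your bandwidth model: read and rewrite the whole 380 GB at ~10 MB/s
  mb_per_s = 10.0
  print('bandwidth model : %.1f hours' % (2 * big_array * 1024 / mb_per_s / 3600))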

On another note, I also noticed I currently use a 32 KB chunk-size, which
is rather small for such a large array and doesn't seem very useful
anyway, since we don't have a particularly large number of small files.
So I thought it would perhaps be wise to also convert to a 64 KB
chunk-size.
The questions are: (a) is it safe to do such a conversion at the same
time as adding the new device to the array, and (b) do you think it may
severely affect the time it takes for raidreconf to finish?
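
For what it's worth, my understanding of why the chunk-size matters here:
with striping, the chunk size decides which member disk and offset every
block maps to, so going from 32 KB to 64 KB relocates nearly every block
on top of the relocation the new device already causes. A tiny sketch of
the idea, using the plain striping formula for equal-sized members (the
real md code handles unequal members with zones, so this is a
simplification):

  # where does logical block b live, given n member disks and a chunk of c blocks?
  def locate(b, n, c):
      stripe = b // c                    # which chunk the block falls into
      return (stripe % n,                # member disk
              (stripe // n) * c + b % c) # offset on that disk

  blocks = 1024                          # a toy device, counted in 1 KB blocks
  moved = sum(1 for b in range(blocks)
              if locate(b, 3, 32) != locate(b, 4, 64))  # 3->4 disks, 32->64 KB chunks
  print('%d of %d blocks end up somewhere else' % (moved, blocks))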

After I send this, I will also try running raidreconf for a chunk-size
conversion on this now-110 GB array, to see how long it takes, and report
back tomorrow.
That's all for now, thanks again.
Regards,
Cajoline Leblanc
cajoline at chaosengine dot de

