Re: raidreconf

On Wed, Feb 06, 2002 at 02:11:08AM +0200, Cajoline wrote:
...
> So this means I have to maintain the device sequence in the new
> arrangement, right? I hope this doesn't sound too naive, but how will
> the raid layer know the new location of each partition?

No - all superblocks on the partitions participating in a specific array carry
the same "unique ID".   This identifies the superblocks belonging to that
array, among the myriad of partitions with superblocks you may have on your
system (if you run many arrays).

Each superblock also contains information like "I'm disk 0 in this array", 
or "I'm disk 3 in this array"  -  whether the kernel thinks the partition
name is /dev/hda5 or /dev/hde5 is of no importance to the RAID layer.

Shuffle the disks around as you please (as long as your arrays use persistent
superblocks!)
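
For reference, a minimal /etc/raidtab for such an array could look roughly like
this (raidtools syntax; the device names are made up for the example):

   # hypothetical three-partition RAID-0 array
   raiddev /dev/md0
       raid-level              0
       nr-raid-disks           3
       nr-spare-disks          0
       persistent-superblock   1
       chunk-size              32
       device                  /dev/hda5
       raid-disk               0
       device                  /dev/hdb5
       raid-disk               1
       device                  /dev/hdc5
       raid-disk               2

mkraid writes a superblock onto each of the listed partitions, recording the
array's unique ID and that partition's raid-disk index - which is why the
members are still recognized if the drives later show up under different
device names.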

...
> I decided to do a test tonight with a few drives I could repartition
> just for this. I used two drives, 80gb and 100gb, made two equal-size
> partitions on each, and made an array with the 2 40gb and the 1 50gb
> partition. I copied something to it, filled about 1.5gb, and then tried
> raidreconf to add the second 50gb partition to the array. The box had
> 224mb RAM and a Duron 850 MHz processor.

Ok, that's a decent portion of memory.

> It gave an estimate of about 12 hours to complete the process. I let it
> continue for about 2 hours before I aborted, and it hadn't reached 20%
> yet. I am not sure how much memory it used, unfortunately I just didn't
> look.

raidreconf will use the majority of memory for its own "gift buffer",
but leave some for the kernel to use (for buffers/caches) as it sees fit.

One thing is certain: all memory is going to be used for *some* purpose
during a raidreconf run.
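
For reference, a grow run with raidreconf is typically invoked along these
lines (raidtools-era syntax; the raidtab file names here are only examples):

   # raidtab.old describes the array as it is now; raidtab.new is the
   # same plus the extra device/raid-disk entries and a bumped
   # nr-raid-disks count.  The array must be stopped (unmounted,
   # raidstop'ed) while this runs.
   raidreconf -o /etc/raidtab.old -n /etc/raidtab.new -m /dev/md0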

> 
> Is this estimation accurate or close to accurate? So could I assume that
> adding a 100gb disk to a 380gb array would take a multiple of that time
> to complete, perhaps more than 48 hours?

The estimate needs a little time to "stabilize", but it should be a fair
estimate.  The speed can change somewhat during the run, depending on array
configurations, but not by orders of magnitude.

So you would actually be reading 380 G of data and writing the same amount
back again.  If we assume that your disks on average will do 10 MB/sec transfers
(because some seeking will be involved, this *may* not be an unreasonable estimate),
then the total transfer would take around
   2[read+write] * 380[GB] * 1024[MB/GB] / 10[MB/s] / 3600[s/hour] = 21.62 [hours]

Perhaps raidreconf could do better than 48 hours.  I would guess that most of
the "missing 26 hours" are disk seek-time.  There are a few sub-optimal
algorithms in raidreconf as well - I don't know if they even show up on
modern CPUs, though.

...
> I suppose I can't make a RAID-0 array that consists of the existing
> array and the new disk and still preserve the existing filesystem, since
> the striping might overwrite the fs, right?

With raidreconf  ;)    But no, not otherwise.

> Can you be a little more specific as to what I should take special care
> for in this process? I am not sure I got it right, but I don't know
> if/how the resize utility will be able find the filesystem that was
> running on top of the old md0 under the new linear-raid device.

Take very special care that your existing array (where your data lives) is
raid-disk 0 in the new linear array.  And make the new disk (about to be
overwritten) raid-disk 1.   Double-check you're making a linear array  ;)

Oh, and you can't use persistent superblocks...  Those are put in the last few
blocks of each participating partition (or, in your case, array), and you most
certainly do not want anything written onto your existing array.

Then, mkraid the new array.  Don't worry, with LINEAR and NO PERSISTENT
SUPERBLOCK none of the underlying partitions will be touched.
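
Roughly, the raidtab for that temporary linear array might look like this
(the new disk's device name is made up here):

   raiddev /dev/md1
       raid-level              linear
       nr-raid-disks           2
       nr-spare-disks          0
       # no superblocks - nothing gets written onto the members
       persistent-superblock   0
       # chunk-size is not used in linear mode, but some raidtools
       # versions want it present anyway
       chunk-size              32
       # the existing array holding the data must be raid-disk 0
       device                  /dev/md0
       raid-disk               0
       # the new, empty disk about to be absorbed
       device                  /dev/hdg1
       raid-disk               1

A plain "mkraid /dev/md1" should then assemble it.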

Now, ext2resize your new array.  The utility should happily find the filesystem
on your linear array and extend it to span the whole array.
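
Something along the lines of

   ext2resize /dev/md1

should do it - I believe ext2resize grows the filesystem to fill the device
when you don't give it an explicit size, but check the man page of the
version you have, and of course run it on the unmounted filesystem.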

...
> 
> Yes I know, it's one of the reasons I hesitate to use raidreconf, along
> with the other problem, that I'm not sure I can keep the box offline
> (with the raid stopped) long enough for raidreconf to finish its job.

Once I'm old and rich and have more time on my hands, I'll put raidreconf
in the kernel to allow for hot-reconfiguration.

Say, in about 50 years.   ;)

> 
> Thank you again. This has been a big help for me to decide what to do in
> this situation.


No problem,  I appreciate the feedback,

Keep backups  ;)

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:
