Re: Fwd: Adding a new drive to an array

On Mon Jun 23, 2014 at 04:47:07pm +0200, George Duffield wrote:

> > Consider adding two drives and go to RAID6.
> >
> > http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
> 
> Isn't what you're really saying then that it's best to abandon RAID
> altogether when dealing with large storage requirements utilising high
> capacity drives?  Filling each drive individually and backing it up to
> another drive presumably involves a lot fewer r/w operations, and if
> one fails you simply replace it with another of the same or larger
> size and be done with it.  Just make sure you have current backups
> (which you need to have even if using RAID).  The only real downside
> then is the inability to deal with the storage as a consolidated whole
> (I'm guessing LVM would be just as problematic as RAID if a drive in a
> volume fails)?
> 
Any parity RAID certainly starts to run into issues in this area. RAID-6
is significantly safer than RAID-5 but will also reach a point where
it's no longer statistically viable. Higher-parity options could be used
then - they're not currently supported by md, but there have been some
ideas floated over the last year or so. Non-parity RAID options (e.g. RAID-10)
are far safer in this respect but have a higher storage overhead.

Manual duplication is unlikely to offer any practical advantages over
RAID-10 though (or layered RAID-1/RAID-0 if you prefer more control over
the layout).
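For concreteness, the two layouts could be set up along these lines. The device names are hypothetical, and mdadm --create destroys any existing data on the listed devices, so the commands are shown commented out as a sketch rather than something to paste in:

```shell
# Hypothetical devices (/dev/sdb1../dev/sde1); mdadm --create wipes them,
# so everything here is commented out - adapt before running.

# Native md RAID-10 across four disks:
#   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
#       /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Layered equivalent (two RAID-1 pairs striped with RAID-0), which gives
# explicit control over which disks mirror each other:
#   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
#   mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
#   mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```

Either way you can lose one disk from each mirrored pair; the layered form just makes the pairing explicit.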

> >> Looking at https://raid.wiki.kernel.org/index.php/Growing it seems the
> >> approach (after partitioning the drive) is to:
> >>
> >> 1) add the drive to the pool: # mdadm --add /dev/md127 /dev/sdX1
> >> 2) grow the array: # mdadm --grow --raid-devices=5
> >> --backup-file=~/grow_md127.backup  /dev/md127
> >
> >
> > Make sure the backup file is NOT on a filesystem that is on the md being
> > resized. Put it on /root or something instead.
> >
> >
> >> 3) edit mdadm.conf to include the 5th drive i.e. num-devices=5
> >> 4) determine raid stride size calculated with chunk / block
> >> 5) ensure the array is unmounted and resize ext4: # resize2fs -S
> >> ascertained_stride_size -p /dev/md127
> >
> >
> > Yeah, that looks about right. Don't know if you really need the stride
> > step, though - just unmount and resize; I would imagine that if you have
> > a reasonably recent version of the tools it'll figure out the stride
> > size automatically.
> >
> > You cannot avoid the resync, so stop looking into weird options. The
> > resizing will take days.
> >
> 
> What are weird options for if not to be used ;-)
> 
> What is the purpose of --assume-clean then when man states:
> When an array is resized to a larger size with --grow --size= the
> new space is normally resynced in the same way that the whole
> array is resynced at creation.  From Linux version 3.0,
> --assume-clean can be used with that command to avoid the automatic
> resync.
> 
Well, --grow --size= is usually for growing onto larger disks (keeping the
same number of them). In this case, if the added space is all zeroes on all
disks (as
it might be at the end of a disk burn-in test, for example) then there's
no need to recalculate the parity (or mirror the data), and
--assume-clean allows you to shortcut that step.
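For example (hypothetical array name; only applicable from Linux 3.0 on, and only safe if the new space really is zeroed):

```shell
# After replacing each member disk with a larger one, grow the component
# size to use the extra space. --assume-clean skips the resync of that
# space, which is only correct if it's known to be all zeroes.
# Commented out: this modifies the array.
#   mdadm --grow /dev/md127 --size=max --assume-clean
```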

In your case, you're adding a new disk so the data needs to be restriped
anyway. I doubt that the --assume-clean option would even work in this
case (I can't see any rational case where it would be useful anyway).

> ^^^ What is the automatic resync being referred to in man?
>
It's what's described in the preceding sentence:
    "the new space is normally resynced in the same way that the whole
     array is resynced at creation"

That's done automatically when you grow an array.
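Putting the earlier steps together, here's a sketch of the whole grow. The chunk size and block size below are assumptions for illustration - check the real values with mdadm --detail and tune2fs -l first, and note the mdadm/resize2fs lines modify the array, so they're commented out:

```shell
# 1) add the new disk to the array
#   mdadm --add /dev/md127 /dev/sdX1

# 2) reshape, keeping the backup file off the array being grown
#   mdadm --grow --raid-devices=5 \
#       --backup-file=/root/grow_md127.backup /dev/md127

# 3) recompute the ext4 stride for the new layout:
#    stride = chunk size / filesystem block size
chunk_kb=512     # assumed chunk size; check: mdadm --detail /dev/md127
block_kb=4       # assumed ext4 block size; check: tune2fs -l /dev/md127
stride=$((chunk_kb / block_kb))
echo "stride=$stride"          # prints: stride=128

# 4) resize the filesystem (a recent resize2fs works the stride out itself)
#   resize2fs -S "$stride" -p /dev/md127
```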

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |


