Re: Grow a RAID-10

Keld Jørn Simonsen wrote:
On Wed, Jul 02, 2008 at 06:47:31PM -0700, Daniel L. Miller wrote:
Keld Jørn Simonsen wrote:
On Wed, Jul 02, 2008 at 02:56:04PM -0700, Daniel L. Miller wrote:
I currently have a RAID10 across (4) SATA drives. It looks like I'm going to need to grow it in the near future. Any tips on a procedure for this? My current plan:

1. Add a PCI SATA controller (the MB has 4 SATA + 4 RAID SATA; it's a Tyan MB with an NFORCE chipset, and I'm not sure if I want to/can use the RAID SATA ports as plain SATA connections).
Why not use the mobo raid sata ports? They are probably faster than a
controller on the pci bus. What kind of pci bus do you have?

My mistake. Confused this one with another system. Only have 4 ports available. I did have the option of using the Nvidia RAID - which I did NOT enable.

Yes, it is fine not to use the two on-board raid controllers in raid
mode, and just run SW raid on them instead. I have a similar mobo with 2
sata controllers and the ability to attach 8 sata drives, all of which I
have been using for SW raid, and I have not experienced any problems
with this setup yet.

I understand that your mobo has 4 onboard sata connections, and that
these are already in use for the current array.

What "kind" of pci bus? Don't understand the question. If it matters, it's a Tyan S2892, a "Thunder K8SE". nForce Pro2200 and AMD8131 PCI-X chipsets.

So it has both a PCI-X bus and a PCI-E bus. You want to attach 2 more drives, so you need a sata controller. That controller could probably be attached via
either the PCI-X bus or the PCI-E bus. It seems like the PCI-X bus - with a
133 MHz option - could be the faster of the two, but given that you will
only add 2 more drives, both PCI-X and PCI-E are possibilities.

PCI-E 1x is likely to be too slow for a 4-drive raid10,f2 array.
My 4-drive raid10,f2 delivers about 320 MB/s and newer disks should be
able to deliver 360 MB/s - well above the 250 MB/s that a PCI-E 1x can
deliver.
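One rough way to check whether the bus would become the bottleneck is to
measure each disk's raw sequential read rate and add them up - a sketch
only, with /dev/sde and /dev/sdf standing in for the two new disks:

   # per-disk sequential read speed, run on an otherwise idle system
   hdparm -t /dev/sde
   hdparm -t /dev/sdf
   # if the summed rates of the drives behind the controller exceed
   # roughly 250 MB/s, a PCI-E 1x card would cap streaming reads
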
2. Add 2 more drives - not necessarily the same size as the existing ones (all 4 of the existing drives are the same size)

3.  Execute "mdadm --grow /dev/md0"
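For reference, the usual sequence for such a grow would look roughly like
the sketch below - /dev/sde and /dev/sdf are only placeholders for the two
new disks, and this assumes an mdadm and kernel new enough to support
reshaping a raid10 array:

   # add the two new disks to the array as spares
   mdadm /dev/md0 --add /dev/sde /dev/sdf
   # reshape the array to spread across all six devices
   mdadm --grow /dev/md0 --raid-devices=6
   # watch the reshape progress
   cat /proc/mdstat
   # once the reshape finishes, grow the filesystem on top
   # (resize2fs, assuming ext2/3/4 sits directly on /dev/md0)
   resize2fs /dev/md0
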
What kind of raid10 do you have?
I don't understand this question either.

mdadm --detail /dev/md0
/dev/md0:
       Version : 00.90.03
 Creation Time : Tue Oct  3 19:11:53 2006
    Raid Level : raid10
    Array Size : 312581632 (298.10 GiB 320.08 GB)
 Used Dev Size : 156290816 (149.05 GiB 160.04 GB)
  Raid Devices : 4
 Total Devices : 4
Preferred Minor : 0
   Persistence : Superblock is persistent

   Update Time : Wed Jul  2 18:46:15 2008
         State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
 Spare Devices : 0

        Layout : near=2, far=1
    Chunk Size : 32K

          UUID : 9d94b17b:f5fac31a:577c252b:0d4c4b2a
        Events : 0.10941692

   Number   Major   Minor   RaidDevice State
      0       8        0        0      active sync   /dev/sda
      1       8       16        1      active sync   /dev/sdb
      2       8       32        2      active sync   /dev/sdc
      3       8       48        3      active sync   /dev/sdd

I was talking about the layout, and you have an n2 layout (standard
raid10 - near=2). You may benefit from a raid10,f2 layout, as it has
faster read performance, but I don't think it is possible to rearrange a
raid10,n2 array into a raid10,f2 array on the fly.
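As far as I know the far layout can only be chosen at creation time, so
switching would mean backing everything up and re-creating the array -
roughly like the sketch below, which destroys the existing data (device
names taken from the mdadm --detail output above):

   # DESTROYS the current array contents - restore from backup afterwards
   # (add --chunk=32 to keep the current 32K chunk size)
   mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
         /dev/sda /dev/sdb /dev/sdc /dev/sdd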

Given that you have a raid10,n2 layout, the speeds of the buses are not
so important, as raid10,n2 cannot deliver such high performance. I would expect less than 100 MB/s coming out of your 2 extra disks.
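If you want a number to compare against, a simple sequential read over the
whole array gives a rough idea - a sketch, with arbitrary bs/count values:

   # sequential read from the array, bypassing the page cache
   dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct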

What is the use of your raid? Is it a database, a file server, a web
server, or the like?

Best regards
keld

This is our all-in-one server. The raid is the primary storage for everything - day-to-day operational files, QuickBooks data, and virtual machines. The O/S and programs are on a separate non-raid drive.
--
Daniel
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
