Re: raid-1 resync speed slows down to 50% by the time it finishes

On Thu, Jul 30, 2009 at 01:11:20PM -0700, David Rees wrote:
> 2009/7/30 Keld Jørn Simonsen <keld@xxxxxxxx>:
> > I think raid10,f2 only degrades 10-20 % while raid1 can degrade as much
> > as 50 %. For writing it is about the same, given that you use a file
> > system on top of the raid.
> 
> Has anyone done any benchmarks of near vs far setups?

Yes, there are a number of benchmarks on raid10 near/far scenarios
at http://linux-raid.osdl.org/index.php/Performance

> From what I understand, here's how performance should go for a 2-disk
> raid10 setup:
> 
> Streaming/large reads far: Up to 100% faster since reads are striped
> across both disks

And possibly even faster, since far only uses the faster (outer) half of
each disk for reading.

> Streaming/large reads near: Same as single disk as reads can't be
> striped across both disks

yes.

> Streaming/large writes far: Slower than single disk, since disks have
> to seek to write.  How much of a hit in performance will depend on
> chunk size.
> Streaming/large writes near: Same as single disk.

Due to the file system's elevator, writes come out about the same for
both near and far.

> Random/small reads far: Up to 100% faster

Actually a bit more, because far only uses the faster half of the
disks. One test shows 132 % faster, which is consistent with theory.

> Random/small reads near: Up to 100% faster

One test shows 156 % faster.

> Random/small writes far: Same as single disk.
> Random/small writes near: Same as single disk.

yes.

> So basically, if you have a setup which mostly reads from disk, using
> a far layout is beneficial, but if you have a setup which does a
> higher percentage of writes, sticking to a near layout will be faster.

For reading this is true, but for writing it is not, provided you use a
filesystem with an elevator algorithm. The elevator evens out the lower
raw-device write performance of layout=far, so write performance ends up
about the same for the near and far layouts.
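
To put rough numbers on that, here is a minimal Python sketch of the
expectations above; the single-disk speed and the outer-half gain are
made-up figures for illustration, not measurements:

    # Rough throughput expectations for a 2-disk raid10, following the
    # reasoning above. All inputs are illustrative assumptions.
    def raid10_2disk_estimate(single_disk_mb_s, outer_half_gain=1.2):
        return {
            # far: streaming reads stripe over both disks and stay on the
            # faster (outer) half of each disk
            "streaming_read_far": 2 * single_disk_mb_s * outer_half_gain,
            # near: a single streaming read is effectively served by one disk
            "streaming_read_near": single_disk_mb_s,
            # with a filesystem elevator on top, streaming writes come out
            # about the same for both layouts
            "streaming_write": single_disk_mb_s,
        }

    print(raid10_2disk_estimate(80.0))
    # {'streaming_read_far': 192.0, 'streaming_read_near': 80.0,
    #  'streaming_write': 80.0}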

> I recently set up an 8-disk RAID10 using 8 7200 RPM disks spread across 3 controllers.
> 
> 5 disks are in an external enclosure via eSATA and a PCIe card.
> 2 disks are using onboard SATA controller
> 1 disk is using onboard IDE controller
> 
> I debated whether or not to use near or far, but ultimately stuck with
> near for two reasons:
> 
> 1. The array mostly sees write activity, streaming reads aren't that common.
> 2. I can only get about 120 MB/s out of the external enclosure because
> of the PCIe card [1], so being able to stripe reads wouldn't help get
> any extra performance out of those disks.

Hmm, a PCIe x1 lane should be able to carry 2.5 Gbit/s, or about 300 MB/s
raw; Wikipedia says 250 MB/s usable. It is strange that you can only get
120 MB/s; that is about the speed of a 32-bit PCI bus. I looked at your
reference [1] for the 3132 model. Have you tried it out in practice?
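
For reference, the lane arithmetic, assuming a PCIe 1.x lane at 2.5 Gbit/s
with 8b/10b encoding (10 line bits per 8 data bits):

    # PCIe 1.x x1: 2.5 Gbit/s on the wire, 8b/10b encoding
    raw_gbit_s = 2.5
    usable_mb_s = raw_gbit_s * 1e9 * 8 / 10 / 8 / 1e6
    print(usable_mb_s)   # 250.0 MB/s per direction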

The max you should be able to get out of your raid10 with 8 disks would
then be around 400-480 MB/s for sequential reads: 250 MB/s out of your
PCIe enclosure, or 50 MB/s per disk, plus an additional 50 MB/s for each
of the last 3 disks. You can only count the speed of the slowest disk
involved, multiplied by the number of disks. But even then it is not so
bad. For random reads it is better still, since they are not limited by
the transfer speed of your PCIe controller.
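
The same estimate as a small Python sketch; the 250-300 MB/s usable link
bandwidth and the 80 MB/s single-disk speed are assumptions for
illustration:

    # Sequential-read ceiling for the 8-disk raid10: the 5 disks behind the
    # PCIe x1 card share its bandwidth, and sequential reads proceed at
    # roughly the pace of the slowest disk in the stripe, per the reasoning
    # above.
    def array_read_ceiling_mb_s(n_disks, disks_behind_link, link_mb_s, disk_mb_s):
        per_disk_behind_link = link_mb_s / disks_behind_link
        slowest = min(per_disk_behind_link, disk_mb_s)
        return n_disks * slowest

    print(array_read_ceiling_mb_s(8, 5, 250.0, 80.0))   # 400.0
    print(array_read_ceiling_mb_s(8, 5, 300.0, 80.0))   # 480.0

That is where the 400-480 MB/s range comes from, depending on whether the
card delivers 250 or 300 MB/s.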

> -Dave
> 
> [1] http://ata.wiki.kernel.org/index.php/Hardware,_driver_status#Silicon_Image_3124
