Re: The huge different performance of sequential read between RAID0 and RAID5

On Thu, Jan 28, 2010 at 09:55:05AM -0500, Yuehai Xu wrote:

> 2010/1/28 Gabor Gombas <gombasg@xxxxxxxxx>:
> > On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
> >
> >> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
> >> >>       631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
> > [...]
> >
> >> I don't think any of my drives failed, because there is no "F" in my
> >> /proc/mdstat output
> >
> > It's not failed, it's simply missing. Either it was unavailable when the
> > array was assembled, or you've explicitly created/assembled the array
> > with a missing drive.
> 
> I noticed that, thanks! Is it usual that at the beginning of each
> setup, there is one missing drive?
> 
Yes - in order to make the array available as quickly as possible, it is
initially created as a degraded array.  A recovery is then run to
rebuild onto the final disk.  Otherwise, all disks would need to be
fully written before the array became available.
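
For example, a fresh create looks something like this (the device names
and chunk size below are only illustrative, not taken from your setup):

    # Create a 7-disk RAID5; md assembles it degraded and immediately
    # starts recovery onto the last device:
    mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=7 \
        /dev/sd[b-h]1

    # The array is usable while this runs; progress shows up as a
    # "recovery" line in:
    cat /proc/mdstat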

> >
> >> How do you know my RAID5 array has one drive missing?
> >
> > Look at the above output: there are just 6 of the 7 drives available,
> > and the underscore also means a missing drive.
> >
> >> I tried to set up RAID5 with 5 disks and with 3 disks; after each
> >> setup, a recovery was always run.
> >
> > Of course.
> >
> >> However, if I format my md0 with a command such as:
> >> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
> >> performance for RAID5 becomes normal, at about 200-300MB/s.
> >
> > I suppose in that case you had all the disks present in the array.
> 
> Yes, I did my test after the recovery. In that case, does the "missing
> drive" hurt the performance?
> 
If you had a missing drive in the array when running the test, then this
would definitely affect the performance (as the array would need to do
parity calculations for most stripes).  However, as you've not actually
given the /proc/mdstat output for the array post-recovery, I don't
know whether or not this was the case.
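
For reference, you can confirm this from /proc/mdstat before
benchmarking - look for [7/7] [UUUUUUU] rather than [7/6] [UUUUUU_]
(assuming the array is still /dev/md0):

    # All members present and no recovery/resync line:
    cat /proc/mdstat

    # Or check the "State" and "Failed Devices" fields:
    mdadm --detail /dev/md0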

Generally, I wouldn't expect the RAID5 array to be that much slower than
a RAID0.  You'd best check that the various parameters (chunk size,
stripe cache size, readahead, etc) are the same for both arrays, as
these can have a major impact on performance.
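
Something along these lines would let you compare them (md0 here is a
placeholder for whichever array you're testing, and the mkfs geometry
assumes a 7-disk RAID5 with 64k chunks and 4k blocks):

    # Chunk size:
    mdadm --detail /dev/md0 | grep -i chunk

    # Readahead, in 512-byte sectors:
    blockdev --getra /dev/md0

    # Stripe cache (RAID4/5/6 only); raising it from the default 256
    # often helps sequential throughput considerably:
    cat /sys/block/md0/md/stripe_cache_size
    echo 8192 > /sys/block/md0/md/stripe_cache_size

    # Matching ext3 geometry: stride = 64k/4k = 16, and
    # stripe-width = 16 * 6 data disks = 96:
    mkfs.ext3 -b 4096 -E stride=16,stripe-width=96 /dev/md0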

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |
