Re: RAID 5 performance issue.

On Fri, 5 Oct 2007, Andrew Clayton wrote:

On Fri, 5 Oct 2007 06:25:20 -0400 (EDT), Justin Piszcz wrote:


So you have 3 SATA 1 disks:

Yeah, three of them in the array; there is a fourth standalone disk which
contains the root fs from which the system boots.

http://digital-domain.net/kernel/sw-raid5-issue/mdadm-D

Do you compile your own kernel or use the distribution's kernel?

Compile my own.

What does cat /proc/interrupts say? This is important for seeing whether your
disk controllers are sharing IRQs with other devices.

$ cat /proc/interrupts
          CPU0       CPU1
 0:     132052  249369403   IO-APIC-edge      timer
 1:        202         52   IO-APIC-edge      i8042
 8:          0          1   IO-APIC-edge      rtc
 9:          0          0   IO-APIC-fasteoi   acpi
14:      11483        172   IO-APIC-edge      ide0
16:   18041195    4798850   IO-APIC-fasteoi   sata_sil24
18:   86068930         27   IO-APIC-fasteoi   eth0
19:   16127662    2138177   IO-APIC-fasteoi   sata_sil, ohci_hcd:usb1, ohci_hcd:usb2
NMI:          0          0
LOC:  249368914  249368949
ERR:          0


sata_sil24 handles the raid array; sata_sil, the root fs disk (which, per the
output above, shares IRQ 19 with the USB controllers).


Also note that with only 3 disks in a RAID-5 you will not get stellar
performance; regardless, it should not be 'hanging' as you have
mentioned.  Just out of sheer curiosity, have you tried the AS
scheduler? CFQ is supposed to be better for multi-user performance,
but I would be highly interested to know whether the AS scheduler
changes the 'hanging' problem you are noticing.  I would give it
a shot; also try deadline and noop.

I did try them briefly. I'll have another go.
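For reference, the scheduler can be switched per device at runtime through
sysfs, no reboot needed. A minimal sketch (the device names sda..sdc are
assumptions; substitute the actual members of your array):

```shell
# Show the available schedulers for one disk; the active one is in brackets.
cat /sys/block/sda/queue/scheduler

# Switch each array member to the anticipatory (AS) scheduler.
# Device names sda..sdc are assumed here; adjust to match your array.
for d in sda sdb sdc; do
    echo anticipatory > /sys/block/$d/queue/scheduler
done

# The same mechanism selects the others, e.g.:
#   echo deadline > /sys/block/$d/queue/scheduler
#   echo noop     > /sys/block/$d/queue/scheduler
```

The change takes effect immediately, which makes it easy to A/B test each
scheduler against the hang while the workload is running.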

You probably want to keep nr_requests at 128 and the
stripe_cache_size at 8 MB.  The stripe size of 256k is probably
optimal.

OK.
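Those two knobs also live in sysfs. A sketch, again assuming the member disks
are sda..sdc; note that stripe_cache_size is counted in pages, not bytes, so
the 2048 figure below is an assumption standing in for "about 8 MB per member
disk":

```shell
# Per-disk request queue depth (128 is the kernel default).
for d in sda sdb sdc; do            # device names assumed; adjust
    echo 128 > /sys/block/$d/queue/nr_requests
done

# md stripe cache for the array. The value is in pages (4 KB each) per
# device, so total memory used is value * 4 KB * number of member disks;
# 2048 entries is roughly 8 MB per disk, ~24 MB total on a 3-disk array.
echo 2048 > /sys/block/md0/md/stripe_cache_size
```

These settings do not survive a reboot, so once a good value is found it
would go in an init script.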

Did you also re-mount the XFS partition with the default mount
options (or just take the sunit and swidth)?

The /etc/fstab entry for the raid array is currently:

/dev/md0                /home                   xfs    noatime,logbufs=8 1 2

and mount says

/dev/md0 on /home type xfs (rw,noatime,logbufs=8)

and /proc/mounts

/dev/md0 /home xfs rw,noatime,logbufs=8,sunit=512,swidth=1024 0 0

So I guess mount or the kernel is setting the sunit and swidth values.
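One way to confirm that: xfs_info reports the geometry that was recorded in
the superblock at mkfs time (here /home is the mount point from above). If
sunit/swidth appear there, the kernel is picking them up from the filesystem
itself rather than from the mount options. One caveat: xfs_info reports them
in filesystem blocks, while /proc/mounts shows them in 512-byte units, so the
numbers will differ by a constant factor.

```shell
# Stripe geometry stored in the XFS superblock at mkfs time;
# look for sunit= and swidth= in the "data" section of the output.
xfs_info /home
```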

Justin.


Andrew


The sunit/swidth mount options come from when the filesystem was made, I believe.

       -N     Causes the file system parameters  to  be  printed  out  without
              really creating the file system.

You should be able to run mkfs.xfs -N /dev/md0 to get that information.

/dev/md3        /r1             xfs    noatime,nodiratime,logbufs=8,logbsize=262144 0 1

Try using those options and the AS scheduler, and let me know if you still notice any 'hangs'.

Justin.
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
