Re: [CentOS] Shrinking a volume group

Quoting Steve Bergman <steve@xxxxxxxx>:

> OK.  Now I'm a bit confused.  RAID 1 read performance is not what I
> expected.

Note that hdparm reports the raw speed of the drive. In the real world, you will at least have the overhead of the file system, and possibly also the overhead of the LVM and md device drivers (if you use them).
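
To see the gap yourself, you can compare the raw device speed against a read that goes through the file system. A minimal sketch, assuming the drive is /dev/sda and the file system is mounted at /mnt (both placeholders for your own layout):

  # raw device throughput, bypassing the file system
  hdparm -t /dev/sda

  # write a 1GB test file, remount to flush it from the page cache
  # (assumes /mnt is in fstab), then time a read through the file system
  dd if=/dev/zero of=/mnt/testfile bs=1M count=1024
  umount /mnt && mount /mnt
  time dd if=/mnt/testfile of=/dev/null bs=1M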

libata in current stable kernels doesn't support NCQ. Support for NCQ will be added in 2.6.18 (currently at the rc7 level). Unless Red Hat bumps the kernel version in the final release of RHEL5 or backports NCQ to 2.6.17 (I wouldn't bet on backporting; there were some major changes in libata), you are not going to see it in the forthcoming RHEL5 either (beta 1 uses a 2.6.17 kernel).

If you run only a single process, the md device driver will read from the disks in round-robin fashion. You can even observe this visually if your hard drives have separate LEDs: only one of them will be active at any point in time during a sequential read test. The driver is not smart enough to stripe reads across RAID1 members. I'm not sure whether this is due to the lack of NCQ support, or how much (if at all) NCQ will help once support is added to the Linux kernel. I didn't have a spare SCSI system to test how things work there.
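
If your drives don't have separate LEDs, iostat (from the sysstat package) shows the same thing numerically. A sketch, assuming the mirror is /dev/md0:

  # terminal 1: extended per-device statistics, refreshed every second
  iostat -x 1

  # terminal 2: a single sequential reader straight off the array
  dd if=/dev/md0 of=/dev/null bs=1M count=2048

With one reader you should see nearly all the read activity on one mirror member while the other sits idle.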

However, if you run two processes, the md driver will read from the two drives in parallel. Again, you can observe this visually if your drives have individual LEDs (both will be lit).
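
Without bonnie++, the two-reader case can be approximated like this (the file names are made up; use files larger than RAM so the page cache doesn't skew the result):

  # two independent sequential readers; md can serve each from a different mirror half
  dd if=/mnt/file1 of=/dev/null bs=1M &
  dd if=/mnt/file2 of=/dev/null bs=1M &
  wait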

I've run a couple of benchmarks (using bonnie++) that also show this numerically; a rough sketch of the commands follows the table. For the "one drive" tests I simply detached the second disk from the mirror, so the overhead of the drivers (md+lvm+ext3) is about the same. The numbers for the "two processes" tests are per process (multiply by two to get total throughput).

Test                                seq write (kB/s)    seq read (kB/s)
=======================================================================
raid1, single process                   37716              47330
raid1, two processes (each)             16570              31572
degraded raid1, single process          39076              47627
degraded raid1, two processes (each)    16368               6759
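
For reference, a rough sketch of how such a run can be set up (the device names, partitions and mount points below stand in for your own layout, and -s should be at least twice your RAM to defeat caching):

  # degrade the mirror to get the "one drive" numbers
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1

  # single-process run
  bonnie++ -d /mnt/test -s 2048 -u nobody

  # two processes in parallel, each in its own directory
  bonnie++ -d /mnt/test1 -s 2048 -u nobody &
  bonnie++ -d /mnt/test2 -s 2048 -u nobody &
  wait

  # put the mirror back together and watch it resync
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat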

Writing to a single drive (a degraded RAID1 in this case) is a bit faster than writing to RAID1, since there's no need to wait for the data to be written to both drives.

If two processes are writing to the same disk, the total is about the same. Note that 16.5MB/s is per process, so the total for the disk is 33MB/s; that's 33MB/s vs. 37MB/s. I'd have expected a bigger difference due to all the extra disk seeks, so this result is actually very good.

You can see the effect of the md driver not striping reads on RAID1: almost the same speed (47MB/s) for the RAID1 and degraded-RAID1 (single drive) cases.

On the other hand, if two processes read in parallel, each is able to read 31.5MB/s, for a total of 63MB/s. Much better. It's still not double the speed (those two processes are fighting for system resources after all, not only the disks but also CPU time).

Reading from a degraded RAID-1 with two processes clearly sucks. Total throughput drops to around 13MB/s. I have no good explanation for such a low number (it is less than half of the write throughput).

--
NOTICE: If you are not the intended recipient, you are hereby notified
that by reading this message you agreed not to disturb frogs during
mating season.  For more info, visit http://www.8-P.ca/


