Re: about linear and about RAID10

Hi again, all --

...and then Roger Heflin said...
% You do not want to stripe 2 partitions on a single disk, you want that linear.
% 
...
% 
% do a dd if=/dev/mdXX of=/dev/null bs=1M count=100 iflag=direct  on one
% of the raid5s of the partitions and then on the raid1 device over
% them.  I would expect the raid device over them to be much slower, I
% am not sure how much but 5x-20x.

Note that we aren't talking RAID5 here, just simple RAID1, but I follow
you.  Time for more testing.  I ran the same dd tests as on the RAID5 setup:

  jpo:~ # for D in 41 40 ; do for C in 128 256 512 ; do for S in 1M 4M 16M ; do CMD="dd if=/dev/md$D of=/dev/null bs=$S count=$C iflag=direct" ; echo "## $CMD" ; $CMD 2>&1 | egrep -v records ; done ; done ; done
  ## dd if=/dev/md41 of=/dev/null bs=1M count=128 iflag=direct
  134217728 bytes (134 MB, 128 MiB) copied, 0.710608 s, 189 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=128 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 2.7903 s, 192 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=128 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.3205 s, 190 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=1M count=256 iflag=direct
  268435456 bytes (268 MB, 256 MiB) copied, 1.41372 s, 190 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=256 iflag=direct
  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.50616 s, 195 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=256 iflag=direct
  4294967296 bytes (4.3 GB, 4.0 GiB) copied, 22.7846 s, 189 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=1M count=512 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 3.02753 s, 177 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=4M count=512 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 11.2099 s, 192 MB/s
  ## dd if=/dev/md41 of=/dev/null bs=16M count=512 iflag=direct
  8589934592 bytes (8.6 GB, 8.0 GiB) copied, 45.5623 s, 189 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=128 iflag=direct
  134217728 bytes (134 MB, 128 MiB) copied, 1.19657 s, 112 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=128 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 4.32003 s, 124 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=128 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 12.0615 s, 178 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=256 iflag=direct
  268435456 bytes (268 MB, 256 MiB) copied, 2.38074 s, 113 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=256 iflag=direct
  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.62803 s, 124 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=256 iflag=direct
  4294967296 bytes (4.3 GB, 4.0 GiB) copied, 25.2467 s, 170 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=1M count=512 iflag=direct
  536870912 bytes (537 MB, 512 MiB) copied, 5.13948 s, 104 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=4M count=512 iflag=direct
  2147483648 bytes (2.1 GB, 2.0 GiB) copied, 16.5954 s, 129 MB/s
  ## dd if=/dev/md40 of=/dev/null bs=16M count=512 iflag=direct
  8589934592 bytes (8.6 GB, 8.0 GiB) copied, 55.5721 s, 155 MB/s

and did the math again (rows are the dd count, columns the block size;
each cell shows md41/md40 throughput in MB/s, with the md41:md40 ratio
beneath):

          1M        4M       16M
      +---------+---------+---------+
  128 | 189/112 | 192/124 | 190/178 |
      | (1.68)  | (1.54)  | (1.06)  |
      +---------+---------+---------+
  256 | 190/113 | 195/124 | 189/170 |
      | (1.68)  | (1.57)  | (1.11)  |
      +---------+---------+---------+
  512 | 177/104 | 192/129 | 189/155 |
      | (1.70)  | (1.48)  | (1.21)  |
      +---------+---------+---------+
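
(For anyone checking the ratios: since both runs move the same number of
bytes, each one is just the ratio of the two copy times; for the 1M/128
cell, for instance, something like

  awk 'BEGIN { printf "%.2f\n", 1.19657 / 0.710608 }'   # -> 1.68

gets you there.)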

and ... that was NOT what I expected!  I wonder if it's because of stripe
versus linear again.  A straight mirror will run down the entire disk,
so there's no speedup; if you have to seek from one end to the other, the
head moves the whole way.  By mirroring two halves and swapping them and
then gluing them together, though, a read *should* only have to hit the
first half of either disk and thus be FASTER.  And maybe that's the case
for random versus sequential reads; I dunno.  The gap shrinks quite a bit
for the large 16M reads (down to 6-21%), but I take roughly a 40% throughput
hit on the small 1M reads -- and this server leans much more toward small
files than large ones.  Bummer :-(
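
(To be concrete about the layout I have in mind -- the partition names
and md42 below are invented, and I'm assuming the top layer is the stripe
we've been discussing -- it's roughly:

  # hypothetical sketch: cross-mirror the two halves of two disks
  # (sdX1 = first-half partition, sdX2 = second-half partition)
  mdadm --create /dev/md41 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
  mdadm --create /dev/md42 --level=1 --raid-devices=2 /dev/sdb1 /dev/sda2
  # ...and glue the two mirrors together with a stripe on top
  mdadm --create /dev/md40 --level=0 --raid-devices=2 /dev/md41 /dev/md42

so every read *could* be served from whichever copy lives on a first-half
partition.)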

I don't at this time have a spare device to plug in locally to back up the
volume so I can destroy and rebuild it as linear, so that will have to wait.
When I do get that chance, though, will that help me get to the awesome
goal of actually INCREASING performance by including a RAID0 layer?
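
(When I do get there, I'm assuming the rebuild is just the same create
with linear in place of the stripe -- again, md42 is only a stand-in for
whatever the second mirror really is:

  # hypothetical: recreate only the top-level device, linear instead of RAID0
  mdadm --create /dev/md40 --level=linear --raid-devices=2 /dev/md41 /dev/md42

-- with the data restored on top afterward, of course.)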


Thanks again & HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



