Re: about linear and about RAID10 (was "Re: how do i fix these RAID5 arrays?")

You do not want to stripe 2 partitions on a single disk; you want that linear.

With a striped write across 4 striped partitions you get this:
write a stripe on part1 for each disk, then do an 8-10ms seek, then
write the next stripe on part2, then seek back to part1 and repeat.

With linear you get:
write a stripe (there usually won't be any seek, and if there is it
will be a single-track seek, which is much quicker), then write the
next stripe.
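
Roughly, that would be created with something like the command below.
The device names are only placeholders (/dev/md1 and /dev/md2 for the
two per-partition arrays, /dev/md10 for whatever md number is free);
swap in your real ones:

  # placeholders: /dev/md1, /dev/md2 = the per-partition arrays,
  # /dev/md10 = a free md number for the combined device
  mdadm --create /dev/md10 --level=linear --raid-devices=2 /dev/md1 /dev/md2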

If the head data rate is 200MB/sec (ballpark typical for a disk), then
that 10ms seek could have written 2MB of data.  So the larger the
stripe size, the less you waste percentage-wise on the seeks.  But if,
say, the block written per disk is 256K, then that write takes around
1.25ms and the seek to the next one takes 10ms, so the seeks
drastically reduce your read/write rates.
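
To put rough numbers on that, using the same ballpark figures:

  linear:  256K / 200MB/s             ~= 1.25ms   -> ~200MB/s
  striped: 256K per (1.25ms + 10ms)   ~= 11.25ms  -> ~23MB/s

so for big sequential I/O the striped-partition layout can end up
roughly 10x slower.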

Do a dd if=/dev/mdXX of=/dev/null bs=1M count=100 iflag=direct on one
of the raid5s over the partitions, and then on the raid1 device over
them.  I would expect the raid device over them to be much slower; I am
not sure by how much, but something like 5x-20x.
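
For example, with placeholder names (here /dev/md1 stands in for one
of the per-partition raid5s and /dev/md10 for the device layered over
them; substitute whatever yours are actually called):

  # one of the per-partition raid5s (placeholder name)
  dd if=/dev/md1 of=/dev/null bs=1M count=100 iflag=direct
  # the device layered over them (placeholder name)
  dd if=/dev/md10 of=/dev/null bs=1M count=100 iflag=direct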

On Fri, Nov 25, 2022 at 7:36 AM David T-G <davidtg-robot@xxxxxxxxxxxxxxx> wrote:
>
> Wol, et al --
>
> ...and then Wol said...
> % On 24/11/2022 21:10, David T-G wrote:
> % > How is linear different from RAID0?  I took a quick look but don't quite
> % > know what I'm reading.  If that's better then, hey, I'd try it (or at
> % > least learn more).
> %
> % Linear tacks one drive on to the end of another. Raid-0 stripes across all
> % drives. Both effectively combine a bunch of drives into one big drive.
>
> Ahhhhh...  I gotcha.  Thanks.
>
>
> %
> ...
> %
> % That's why there's raid-10. Note that outside of Linux (and often inside)
> % when people say "raid-10" they actually mean "raid 1+0". That's two raid-1
> % mirrors, striped.
>
> That's basically what I have on the web server:
>
>   jpo:~ # mdadm -D /dev/md41 | egrep '/dev|Level'
>   /dev/md41:
>           Raid Level : raid1
>          0       8       17        0      active sync   /dev/sdb1
>          1       8       34        1      active sync   /dev/sdc2
>   jpo:~ # mdadm -D /dev/md42 | egrep '/dev|Level'
>   /dev/md42:
>           Raid Level : raid1
>          0       8       18        0      active sync   /dev/sdb2
>          1       8       33        1      active sync   /dev/sdc1
>   jpo:~ # mdadm -D /dev/md40 | egrep '/dev|Level'
>   /dev/md40:
>           Raid Level : raid0
>          0       9       41        0      active sync   /dev/md/md41
>          1       9       42        1      active sync   /dev/md/md42
>   jpo:~ #
>   jpo:~ #
>   jpo:~ # parted /dev/sdb p
>   Model: ATA ST4000VN008-2DR1 (scsi)
>   Disk /dev/sdb: 4001GB
>   Sector size (logical/physical): 512B/4096B
>   Partition Table: gpt
>   Disk Flags:
>
>   Number  Start   End     Size    File system  Name                    Flags
>    1      1049kB  2000GB  2000GB               Raid1-1
>    2      2000GB  4001GB  2000GB               Raid1-2
>    4      4001GB  4001GB  860kB   ext2         Seag4000-ZDHB2X37-ext2
>
>   jpo:~ # parted /dev/sdc p
>   Model: ATA ST4000VN008-2DR1 (scsi)
>   Disk /dev/sdc: 4001GB
>   Sector size (logical/physical): 512B/4096B
>   Partition Table: gpt
>   Disk Flags:
>
>   Number  Start   End     Size    File system  Name                    Flags
>    1      1049kB  2000GB  2000GB               Raid1-2
>    2      2000GB  4001GB  2000GB               Raid1-1
>    4      4001GB  4001GB  860kB                Seag4000-ZDHBKZTG-ext2
>
>
> %
> ...
> %
> % Either version (10, or 1+0) gives you the speed of striping, and the
> % safety of a mirror. 10, however, can use an odd number of disks, and disks
> % of random sizes.
>
> That's still magic to me :-)  Mirroring (but not doubling up the
> redundancy) on an odd number of disks?!?
>
>
> %
> % Cheers,
> % Wol
>
>
> HAND
>
> :-D
> --
> David T-G
> See http://justpickone.org/davidtg/email/
> See http://justpickone.org/davidtg/tofu.txt
>


