Re: Raid 0+1

On 30.11.2012 16:29, Lars Marowsky-Bree wrote:
> On 2012-11-30T15:27:56, Sebastian Riemer <sebastian.riemer@xxxxxxxxxxxxxxxx> wrote:
> 
>> Yes, it is possible, but it only makes sense if you want to mirror to
>> another server, as most people know that the alternative, DRBD, is too
>> slow for serious storage requirements.
>>
>> Create the RAID-0 first, then take your RAID-0 device and e.g. an iSCSI
>> device from another storage server with the same setup and create a
>> RAID-1 over them. Then, you've got your stacked MD layers.
>>
>> With the flag write-mostly you can even tell the read balancing that the
>> remote device is slower than the local one.
> 
> That is somewhat orthogonal to the original discussion, but in which
> benchmarks is this approach faster than DRBD - aren't the bottlenecks
> still the spindle and the network IO?
> 

Hi Lars,

just "blktrace" DRBD while doing a file copy with at least 512 KiB
read-ahead. Power off the secondary and "blktrace" again.
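
For reference, a minimal sketch of that test (the device name /dev/drbd0
and the trace prefix are assumptions, and a plain sequential read stands
in for the file copy):

  # 1024 sectors * 512 B = 512 KiB read-ahead on the DRBD device
  blockdev --setra 1024 /dev/drbd0
  # trace the device while reading from it
  blktrace -d /dev/drbd0 -o drbd0 &
  dd if=/dev/drbd0 of=/dev/null bs=1M count=1024
  kill %1
  # dispatched requests; the size in sectors follows the '+'
  blkparse -i drbd0 | grep ' D '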

Here is what you'll see:
DRBD hashes data in 128 KiB chunks. You can never get bigger IOs than
that, which is bad for big sequential transfers.

In the second test you'll see that DRBD has dynamic IO request size
detection. It always starts with 4 KiB limits. If you lose the
connection to the other host, even your local IO is limited to 4 KiB.
Sorry, but this is crap.
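
You can also watch the limit the device currently advertises from
userspace; a quick check (device name assumed, and these are the generic
block-queue limits in sysfs, not a DRBD-specific interface):

  cat /sys/block/drbd0/queue/max_hw_sectors_kb
  cat /sys/block/drbd0/queue/max_sectors_kb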

There are lots of other performance-related bugs in DRBD. If you run it
in a virtual data center, you'll see 4 KiB IOs while syncing because
they use the blk limits as signed instead of unsigned and KVM
initializes them as "-1U". They've fixed that one in 8.3.14 and 8.4.2.

Furthermore, there are lots of performance issues that become clearly
visible when you use a fast transport like QDR InfiniBand. We saw
ridiculously poor performance with that; DRBD introduces a lot of
latency.

With SRP as the transport things are much better. Put MD RAID-1 on top
and this is nice! If you've got both rdevs as remote storage, you can
even get symmetric latency (both rdevs behaving the same) with MD
RAID-1.

The write-intent bitmap of MD is really sophisticated!
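
To make the stacked setup from the quoted mail concrete, here is a rough
mdadm sketch (the device names are made up: /dev/sda and /dev/sdb are
local disks, /dev/sdx is the remote iSCSI/SRP device):

  # local RAID-0 across two disks
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  # RAID-1 over the local RAID-0 and the remote device, with an internal
  # write-intent bitmap; the slower remote leg is flagged write-mostly
  mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal \
        /dev/md0 --write-mostly /dev/sdx

With the remote leg marked write-mostly reads stay local, and the bitmap
keeps the resync after a disconnect short.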

Cheers,
Sebastian


-- 
Sebastian Riemer
Linux Kernel Developer - Storage

We are looking for (SENIOR) LINUX KERNEL DEVELOPERS!

ProfitBricks GmbH • Greifswalder Str. 207 • 10405 Berlin, Germany
www.profitbricks.com • sebastian.riemer@xxxxxxxxxxxxxxxx

Sitz der Gesellschaft: Berlin
Registergericht: Amtsgericht Charlottenburg, HRB 125506 B
Geschäftsführer: Andreas Gauger, Achim Weiss
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

