Re: Linux MD RAID 1 read performance tuning

On Thu, Dec 24, 2009 at 11:03 AM, Goswin von Brederlow
<goswin-v-b@xxxxxx> wrote:
> Keld Jørn Simonsen <keld@xxxxxxxxxx> writes:
>
>> On Tue, Dec 22, 2009 at 07:08:25PM +0200, Ciprian Dorin, Craciun wrote:
>>> 2009/12/22 Keld Jørn Simonsen <keld@xxxxxxxxxx>:
>>> > On Tue, Dec 22, 2009 at 06:34:55PM +0200, Ciprian Dorin, Craciun wrote:
>>> >>     Hello all!
>>> >>
>>> >>     I've created a 64G RAID 1 array from 3 real disks. (I intend to
>>> >> use this as a target for backups.)
>>> >>     Now while playing around with this array, I've observed that the
>>> >> read performance is quite low because it always reads from the disk in
>>> >> the first slot (which happens to be the slowest...)
>>> >>
>>> >>     So my questions are:
>>> >>     * is there any way to tell the MD driver to load-balance the reads
>>> >> between the three disks?
>>> >
>>> > It does not make sense to do distributed reading in raid1 for sequential
>>> > files. It will not be faster to read from more drives, because that
>>> > would only make each drive skip blocks, and in the time a drive spends
>>> > skipping over blocks it could just as well have read them. So it is
>>> > better to read all the blocks off one drive, and let the other drives
>>> > handle other possible IO.
>>>
>>>     Aha. It makes sense now. But does it mean that if I have parallel
>>> IO's (from different read operations) they are going to be distributed
>>> between the disks?
>>
>> It should, but I am not fully sure it does.
>> But try it out with two concurrent reads of two big files, and then
>> watch it with iostat.
>>
>> Best regards
>> keld
>
> Actually the kernel remembers the last read/write position for each
> raid1 component and then uses the one which is nearest.
>
> And when you read at 2 or 3 different positions at the same time it
> will use a different component for each and use the same ones for
> subsequent reads (as they will be nearer).
>
> Try
>
> dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=0 &
> dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=1024 &
> dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=2048
>
> They should more or less get the same speed as a single dd.
>
> MfG
>        Goswin


    Thanks all for your feedback. (I haven't tried the proposed three
dd's in parallel, but I promise I'll try them the next time I assemble
my backup array.)
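
    (When I do, I'll also keep iostat running in another terminal while
the three dd's are going, to see which member disks actually receive
the reads, as Keld suggested. Roughly something like this -- the member
disk names are just placeholders for whatever they happen to be on my
box:

    # extended per-device statistics for the three members, every 2 seconds
    iostat -x sda sdb sdc 2

    If the reads really are balanced, each dd should show up on a
different disk.)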

    One observation though:
    * indeed my usage of the array was mono-process;
    * when reading from the array to compute the MD5 sums for the
files I've used only one process;
    * indeed the data was read from a single disk (at a time);
    * but now the interesting thing: I think it favored one disk
(the same one most of the time) over the others.

    Is this as expected?
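
    If it is, I guess the workaround on my side is to run the
checksumming with several processes, so the reads get spread over the
disks. Something along these lines is what I have in mind -- the mount
point and the degree of parallelism below are just placeholders for my
setup:

    cd /mnt/backup
    # checksum files with 3 concurrent md5sum processes,
    # so the MD driver can direct each read stream to a different disk
    find . -type f -print0 | xargs -0 -n 16 -P 3 md5sum > /tmp/backup.md5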

    Thanks again,
    Ciprian.
