Re: Linux Raid performance

Thanks to everyone for sharing their experiences - this was helpful.

I don't have the luxury of 16 SAS/SATA drives here, so it would be great
if someone could share results from an even larger number of disks. I'd
like to know what maximum performance can actually be reached. I
understand the theoretical figure is more like a couple of terabytes,
but won't we run into Linux filesystem or other kernel bottlenecks as we
increase the number of disks?
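
For what it's worth, here is the back-of-the-envelope model I've been
using to think about where the ceiling might be. It's only a sketch -
the per-disk rate, the parity-thread ceiling and the HBA bandwidth below
are numbers I made up for illustration, not measurements:

# Rough estimate of md RAID5/6 streaming write throughput vs. disk count.
# All three constants are illustrative assumptions, not measurements.

DISK_MB_S = 100        # assumed sequential write rate of one drive (MB/s)
PARITY_CPU_MB_S = 800  # assumed ceiling of the single mdX_raidN parity thread (MB/s)
HBA_MB_S = 1600        # assumed HBA / PCIe bandwidth to the drives (MB/s)

def raid_write_estimate(n_disks, parity_disks=2):
    """RAID6 (parity_disks=2) or RAID5 (parity_disks=1) streaming write estimate."""
    data_disks = n_disks - parity_disks
    if data_disks < 1:
        raise ValueError("need more disks than parity devices")
    disk_limited = data_disks * DISK_MB_S   # ideal: every data disk streams flat out
    return min(disk_limited, PARITY_CPU_MB_S, HBA_MB_S)

if __name__ == "__main__":
    for n in (4, 8, 16, 24, 32):
        print(f"{n:2d} disks: ~{raid_write_estimate(n):5.0f} MB/s")

The only point of the toy model is that past some width the disks stop
being the limit and you hit whichever of the parity thread, the
controller or the filesystem/page-cache path saturates first - which is
really the question I'm asking above.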

On Fri, Apr 2, 2010 at 6:37 PM, Richard Scobie <richard@xxxxxxxxxxx> wrote:
> Mark Knecht wrote:
>
>> Richard,
>>    Good point. I was limited in my thinking to the sorts of arrays I
>> might use at home, no wider than 3, 4 or 5 disks. However, for an
>> N-wide array, as N grows so do the CPU cycles required to run it. I
>> don't think that applies to the OP, but I don't know that.
>>
>
> I said I thought the busiest CPU was the parity generation one, but in
> hindsight this cannot be correct, as it was almost maxed out at half the
> write speed the array achieved when it was empty.
>
> Regards,
>
> Richard
>
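
For anyone else chasing this, a rough way to see how busy the parity
thread actually gets is to watch its CPU time in /proc while a write
test runs in another terminal. A quick sketch - the thread name
md0_raid6 is an assumption, substitute whatever ps shows for your array:

# Sample CPU usage of an md kernel thread (e.g. md0_raid6) every 5 seconds
# while a write test runs elsewhere.  The thread name is an assumption.
import os
import time

THREAD_NAME = "md0_raid6"
INTERVAL = 5  # seconds between samples

def find_pid(name):
    """Return the pid of the first task whose comm matches name."""
    for pid in os.listdir("/proc"):
        if pid.isdigit():
            try:
                with open(f"/proc/{pid}/comm") as f:
                    if f.read().strip() == name:
                        return int(pid)
            except OSError:
                pass
    raise SystemExit(f"no task named {name} found")

def cpu_ticks(pid):
    """utime + stime of pid, in clock ticks."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()  # skip pid and (comm)
    return int(fields[11]) + int(fields[12])         # utime, stime

if __name__ == "__main__":
    hz = os.sysconf("SC_CLK_TCK")
    pid = find_pid(THREAD_NAME)
    prev = cpu_ticks(pid)
    while True:
        time.sleep(INTERVAL)
        cur = cpu_ticks(pid)
        print(f"{THREAD_NAME}: {100.0 * (cur - prev) / hz / INTERVAL:.1f}% CPU")
        prev = cur

That would at least show whether the parity thread is anywhere near a
whole core at the point where the write speed drops off.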
