Re: MAX IOPS - s/w scaling issues?

Keep an eye on CPU and RAM utilization. You may be hitting a bottleneck there.
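If you want to see whether a single core is saturating, here is a rough sketch (untested, Python 3, assuming only the standard /proc/stat layout) that samples per-CPU busy time; one core pinned near 100% while the rest sit idle usually points at a software or IRQ bottleneck rather than the disks:

#!/usr/bin/env python3
# Sample /proc/stat twice and print per-CPU busy percentage.
import time

def snapshot():
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            # per-CPU lines look like: cpu0 user nice system idle iowait irq softirq ...
            if line.startswith("cpu") and line[3].isdigit():
                fields = line.split()
                ticks = [int(x) for x in fields[1:]]
                stats[fields[0]] = (sum(ticks), ticks[3] + ticks[4])  # (total, idle + iowait)
    return stats

before = snapshot()
time.sleep(2)
after = snapshot()
for cpu in sorted(before, key=lambda c: int(c[3:])):
    total = after[cpu][0] - before[cpu][0]
    idle = after[cpu][1] - before[cpu][1]
    print("%s: %5.1f%% busy" % (cpu, 100.0 * (total - idle) / total if total else 0.0))

(mpstat -P ALL from the sysstat package gives the same picture.)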

Have you tried changing the queue depth of each SSD?
This may be of interest to you: http://linux-raid.osdl.org/index.php/Performance
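As an example, here is a rough sketch (the sd* glob and the depth value are assumptions for illustration; writing sysfs requires root) that raises the block-layer queue depth, nr_requests, on every member disk. /sys/block/sdX/device/queue_depth is the analogous knob for the drive's own NCQ depth:

#!/usr/bin/env python3
# Print and (if root) raise nr_requests for each sd* disk.
import glob

NEW_DEPTH = "512"  # assumed value for illustration; tune and re-benchmark

for path in sorted(glob.glob("/sys/block/sd*/queue/nr_requests")):
    with open(path) as f:
        print(path, "=", f.read().strip())
    try:
        with open(path, "w") as f:
            f.write(NEW_DEPTH)
    except PermissionError:
        print("  need root to change", path)

Re-run the benchmark after each change; deeper queues help small random I/O only until the CPU or controller saturates.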

On Tue, Dec 1, 2009 at 5:24 AM, Arun Jagatheesan
<arun.jagatheesan@xxxxxxxxx> wrote:
>
> We are running some experiments on our Linux RAID and are hitting a bottleneck scaling the RAID beyond ~250K IOPS. Is there a known software limit, or is this simply a configuration issue?
>
> We have 16 x 64GB SSDs in a RAID0. IOPS on this RAID0 tops out at 250K (whether we use fewer or more SSD drives), whereas we get 540K IOPS from the same number of SSDs accessed raw (no RAID). We see linear scaling for each SSD added to the RAID0 until we hit the ~250K IOPS limit; after that, performance is a flat plateau with no further increase.
>
> Is there a known limit on the maximum IOPS the software can handle?
>
> Cheers,
> Arun



--
      Majed B.
