Re: single cpu thread performance limit?

Hi

Quick update with the XFS tests suggested (although a filesystem is still
probably not a real option for me at the moment).

This rig only has 4 flash devices (2 MLC and 2 SLC): 125K IOPS each for
the MLC, 165K each for the SLC.

Created a linear RAID and an XFS filesystem on it with agcount=4.

Mounted as suggested and created 4 test folders, one per AG.
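
For reference, the setup was roughly along these lines (device names are
only placeholders for the four flash cards, and I've left out the exact
mount options that were suggested):

  # linear md array across the four flash devices (placeholder names)
  mdadm --create /dev/md0 --level=linear --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # one allocation group per device
  mkfs.xfs -d agcount=4 /dev/md0

  # mounted with the suggested options, then one test folder per AG
  mount /dev/md0 /mnt/test
  mkdir /mnt/test/ag0 /mnt/test/ag1 /mnt/test/ag2 /mnt/test/ag3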

If I test each folder individually, we get 99.9% of the rated IOPS (i.e.
125K for the first 2 AGs and 165K for the last 2), which is great news
and means the allocation groups do what they should.

But if I run the test over all 4 at once, it peaks at around 320K IOPS.
Interestingly, each AG then delivers about 80K IOPS, and as the
individual results above show, this need not be the case; the CPU load
shows no problems, so I am presuming this could simply be an XFS limit.
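
(The combined test is just the four per-folder jobs run at once.  Purely
as an illustration of the shape of the run -- fio shown here, not
necessarily the exact tool or job options I used -- it looks like:

  fio --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
      --size=1g --runtime=60 --time_based \
      --name=ag0 --directory=/mnt/test/ag0 \
      --name=ag1 --directory=/mnt/test/ag1 \
      --name=ag2 --directory=/mnt/test/ag2 \
      --name=ag3 --directory=/mnt/test/ag3

Each job reports its own IOPS, so the per-AG numbers fall straight out
of the output.)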


More testing with several RAID1s and a RAID0 on top seems to suggest
that the RAID0 layer loses around 20-25% of the IOPS (the RAID1s around
5%).  I have tried an LVM stripe instead, with much the same result.
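
(The layered setups are built along these lines -- device and md names
are only placeholders, and the exact layout varies, but the shape is
e.g. two RAID1 pairs with a RAID0, or an LVM stripe, across them:

  # two RAID1 pairs, then a RAID0 across them
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
  mdadm --create /dev/md10 --level=0 --chunk=64 --raid-devices=2 \
      /dev/md1 /dev/md2

  # or an LVM stripe across the same pairs instead of the RAID0
  pvcreate /dev/md1 /dev/md2
  vgcreate vgtest /dev/md1 /dev/md2
  lvcreate -i 2 -I 64 -l 100%FREE -n lvtest vgtest
)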






On Thu, Aug 11, 2011 at 7:58 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> On 8/11/2011 10:58 AM, mark delfman wrote:
>> I seem to have hit a significant hard stop in MD RAID1/10 performance
>> which seems to be linked to a single CPU thread.
>
> What is the name of the kernel thread that is peaking your cores?  Could
> the device driver be eating the CPU and not the md kernel threads?  Is
> it both?  Is it a different thread?  How much CPU is the IO generator
> app eating?
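
(For anyone following along, the per-thread breakdown is easy to get at
with something like the below -- looking for the md*_raid1 / md*_raid10
kernel threads versus the benchmark process itself:

  top -H                                      # 'H' toggles per-thread view
  ps -eLo pid,tid,pcpu,comm --sort=-pcpu | head -20
)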
>
> What Linux kernel version are you running?  Which Linux distribution?
> What application are you using to generate the IO load?  Does it work at
> the raw device/partition level or at the file level?
>
>> I am using extremely high speed (IOPS) internal block devices – 8 in
>> total.  They are capable of achieving > 1million iops.
>
> 8 solid state drives of one model or another, probably occupying 8 PCIe
> slots.  IBIS, VeloDrive, the LSI SSD, or other PCIe based SSD?  Or are
> these plain SATA II SSDs that *claim* to have 125K 4KB random IOPS
> performance?
>
>> However if I use RAID1 / 10 then MD seems to use a single thread which
>> will reach 100% CPU utilisation (single core) at around 200K IOPS.
>> Limiting the entire performance to around 200K.
>
> CPU frequency?  How many sockets?  Total cores?  Whose box?  HP, Dell,
> IBM, whitebox, self built?  If the latter two, whose motherboard?  How
> many PCIe slots are occupied by the SSD cards?
>
>> If I use say 4 x RAID1 / 10’s and a RAID0 on top – I see not much
>> greater results. (although the theory seems to say I should and there
>> are now 4 CPU threads running, it still seems to hit 4 x 100% at maybe
>> 350K).
>
> Assuming you have 4 processors (cores), then yes, you should see better
> scaling.  If you have fewer cores than threads, then no.  Do you see more
> IOPS before running out of CPU when reading vs writing?  You should, as
> you're doing half the device IOs when reading.
>
>> Is there any way to increase the number of threads per RAID set? Or
>> any other suggestions on configurations?  (I have tried every
>> permutation of R0+R1/10’s)
>
> The answer to the first question AFAIK is no.  Do you have the same
> problem with a single --linear array?  What is the result when putting a
> filesystem on each individual drive?  Do you get your 1 million IOPS?
>
> Is MSI enabled and verified to be working for each PCIe SSD device?  See:
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/PCI/MSI-HOWTO.txt;hb=HEAD
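
(A quick way to check, in case it is useful -- the MSI capability state
and the interrupt type actually in use show up with, for example:

  lspci -vv | grep -i -A1 msi     # look for "Enable+" on the SSD cards
  grep -i msi /proc/interrupts    # MSI/MSI-X vectors appear as PCI-MSI
)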
>
> --
> Stan
>

