Re: lio target iscsi multiple core performance

On Wed, 2013-10-09 at 19:30 -0500, Xianghua Xiao wrote:
> On Tue, Oct 8, 2013 at 7:47 PM, Xianghua Xiao <xiaoxianghua@xxxxxxxxx> wrote:

<SNIP>

> I don't have everything yet, but I did apply your patch above, and I
> also installed another HBA/4-SSDs and am now running 24 Endpoints, in
> the hope that I can bind one IRQ for each endpoint to one of the 24
> virtual cores (12 cores * 2 via hyperthreading).
> 
> perf top -t 4 (4 is the process ID of kworker/0) gave me a blank
> output for some reason, so I turned on:
> ++CONFIG_PROFILING=y
> ++CONFIG_TRACEPOINTS=y
> ++CONFIG_EVENT_TRACING=y
> ++CONFIG_TRACING=y
> ++CONFIG_BLK_DEV_IO_TRACE=y
> not sure if I need to turn on more profiling options in menuconfig; I
> have not used perf on PPC before.
> 

CONFIG_PERF_EVENTS=y is the item that needs to be enabled..
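
For reference, a quick way to double check that on the running kernel, and
to re-run the profile against the kworker thread (the TID below is just the
one from your own output), is something like:

  # confirm perf events support in the running kernel; falls back to
  # /proc/config.gz if CONFIG_IKCONFIG_PROC is enabled
  grep CONFIG_PERF_EVENTS /boot/config-$(uname -r) 2>/dev/null || \
      zcat /proc/config.gz | grep CONFIG_PERF_EVENTS

  # profile a single kworker thread by TID (same as your perf top -t 4)
  perf top -t 4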

> With 2 HBAs/8 SSDs/24 Endpoints via iscsi/iblock, and 24 interrupts bound
> to 24 cores, READ is at the same performance (500MB/s) and WRITE has
> increased from 300MB/s to 415MB/s. Again, your patch is applied.
> 

A reasonable improvement on WRITEs from changing the completion workqueue
to be unbound..
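
One non-authoritative way to sanity check that the completion work is
actually spreading out now is to watch which CPUs the busy kworker threads
land on during the WRITE run, e.g.:

  # psr = the CPU each kworker thread last ran on; with an unbound
  # workqueue the busy ones should no longer all sit on CPU0
  ps -eLo tid,psr,pcpu,comm | grep kworker | sort -k3 -rn | head

  # per-CPU utilization during the run (mpstat is from the sysstat package)
  mpstat -P ALL 2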

> I noticed for READ, CPU0 is still 0% idle and the other 23 cores are
> nearly 100% idle. For WRITE, all cores are busy, at something like 5%
> idle. I'm running the deadline IOSCHED for the test, as that's the one I
> used with SCST.

Ok, with the unbound workqueue patch in place, this would likely
indicate that CPU0 is spending most of its time processing interrupts
for the mpt2sas driver.

This makes sense because md raid will be generating lots of I/Os across
the 12 partitions that comprise the 12 software raids.
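
To confirm that, /proc/interrupts will show whether the mpt2sas vectors are
all landing on CPU0, and the affinity can then be spread by hand.  (The IRQ
number and mask below are placeholders, and irqbalance may rewrite manual
settings):

  # per-CPU interrupt counts for the HBA; one hot column means CPU0
  grep -i mpt2sas /proc/interrupts

  # example only: move one mpt2sas vector (say IRQ 98) onto CPU 4
  echo 10 > /proc/irq/98/smp_affinity    # hex bitmask, 0x10 == CPU 4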

> 
> Will run fio tests later on.
> 
> Will FILEIO provide better performance compared to IBLOCK, assuming
> FILEIO can leverage filesystem caching? With SCST, FILEIO provides
> better performance on iscsi; will test that later.
> 

Does that mean your SCST performance results were using FILEIO..?  If
so, then you're already using writeback mode, where incoming WRITEs go into
the buffer cache (memory) first and are immediately acknowledged to the
client, while the actual data blocks are written out in the background.

So yes, you'll see better performance with buffered FILEIO (needs to be
enabled at targetcli /backstores/fileio/$FILEIO_DEV creation time) where
writes hit memory first and are immediately acknowledged back to the
client.
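
Roughly what that looks like at creation time is below.  Treat it as a
sketch rather than exact syntax: the name and backing device are
placeholders, and the write-back parameter name differs between targetcli
versions (write_back=true in targetcli-fb; older builds use a buffered
flag):

  # sketch: create a buffered (write-back) FILEIO backstore on top of
  # the md device, then export it through the usual iqn/tpg/lun steps
  targetcli /backstores/fileio create name=fio_disk1 \
      file_or_dev=/dev/md0 write_back=true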

The downside with this approach (regardless of target implementation) is
that in the event of a power failure, all data written to the buffer
cache but not yet written down to disk will be lost.

--nab




