Re: lio target iscsi multiple core performance

On Mon, Oct 14, 2013 at 1:17 PM, Nicholas A. Bellinger
<nab@xxxxxxxxxxxxxxx> wrote:
> On Fri, 2013-10-11 at 21:04 -0500, xxiao wrote:
>> On Fri, Oct 11, 2013 at 5:28 PM, Nicholas A. Bellinger
>> <nab@xxxxxxxxxxxxxxx> wrote:
>> > On Thu, 2013-10-10 at 18:33 -0500, Xianghua Xiao wrote:
>> >>
>> >
>> > <SNIP>
>> >
>> >> I switched to fileio and now READ is at full speed, i.e. wirespeed of
>> >> 10Gbps, plus all cores are being used (though still only core0 takes
>> >> the MSI-X interrupts). However, WRITE slows down dramatically from
>> >> 415MB/s to 150MB/s, which really puzzles me.
>> >>
>> >
>> > From the Mode: O_DSYNC output below, the FILEIO device is *not* running
>> > in buffered I/O mode for WRITEs.
>> >
>> > Also, can you confirm using iostat -xm that the READs are coming from
>> > the backend device, and not from the buffer cache itself..?
>> >
>> > FYI, the iostat -xm output also shows the size of the I/Os being
>> > submitted to both the MD RAID devices and to the underlying mpt2sas SCSI
>> > devices.  This would be useful for seeing whether the I/O size differs
>> > between backend drivers.
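>> > (A minimal sketch, assuming you watch this on the target while the READ
>> > test is running; with -m the interesting column is rMB/s:)
>> >
>> >   iostat -xm 1
>> >
>> > If the READs were really coming from the buffer cache, md12 and the
>> > underlying SCSI disks would show near-zero rMB/s even while the
>> > initiator sees wirespeed.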
>> >
>> >>
>> >> I had the same settings, including:
>> >> echo 1 > /sys/kernel/config/target/core/fileio_$i/my_fileio$i/attrib/emulate_write_cache
>> >> for all endpoints.
>> >> During WRITEs all cores are equally involved, they're just never fully
>> >> loaded.
>> >>
>> >
>> > The emulate_write_cache setting is the per-device attribute for exposing
>> > the SCSI WriteCacheEnabled (WCE) bit to the initiator; it does not
>> > directly enable buffered I/O operation for FILEIO backends.
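>> > (As a hedged aside: if you want to confirm what the initiator actually
>> > sees, something like sdparm on the imported LUN should report that bit,
>> > e.g. 'sdparm --get=WCE /dev/sdX', where /dev/sdX is a placeholder for
>> > the iSCSI disk on the initiator side.)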
>> >
>> >>
>> >> Here is the output of fileio:
>> >>
>> >> Rounding down aligned max_sectors from 32767 to 32760
>> >> Status: DEACTIVATED  Max Queue Depth: 0  SectorSize: 512  HwMaxSectors: 32760
>> >>         TCM FILEIO ID: 0        File: /dev/md12  Size: 0  Mode: O_DSYNC
>> >>
>> >> I'm still trying to get the optimal throughput for sequential
>> >> READ/WRITE at the moment. Reliability is secondary as I'm trying to
>> >> stress the system performance for now. Is there anything else that I
>> >> can tune for FILEIO (or IBLOCK)?
>> >
>> > Buffered FILEIO can be enabled with targetcli + rtslib, but not during
>> > setup with the legacy lio-utils code.
>> >
>> > The easiest way to do this is to modify /etc/target/tcm_start.sh after
>> > the configuration has been saved, adding the ',fd_buffered_io=1'
>> > parameter for each FILEIO device like so:
>> >
>> > tcm_node --establishdev fileio_0/test fd_dev_name=/tmp/test,fd_dev_size=2147483648,fd_buffered_io=1
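>> > (For many FILEIO devices, a rough one-liner along these lines should
>> > append the parameter to every fd_dev_name= line in the saved config; the
>> > exact format of your tcm_start.sh may differ, so double-check the result
>> > before restarting:)
>> >
>> >   sed -i '/fd_dev_name=/ s/$/,fd_buffered_io=1/' /etc/target/tcm_start.sh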
>> >
>> > From there, restart the target and re-check the same
>> > /sys/kernel/config/target/core/$FILEIO_HBA/$FILEIO_DEV/info output as
>> > above, which should now read:
>> >
>> > Status: DEACTIVATED  Max Queue Depth: 0  SectorSize: 512  HwMaxSectors: 32760
>> >          TCM FILEIO ID: 0        File: /dev/md12  Size: 0  Mode: Buffered-WCE
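>> > (To check all FILEIO devices at once, assuming the fileio_$i naming from
>> > your earlier configfs paths, a simple grep should do it:)
>> >
>> >   grep Mode: /sys/kernel/config/target/core/fileio_*/*/info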
>> >
>> > --nab
>> >
>>
>> After I added fd_buffered_io I'm getting results very similar to SCST's
>> fileio mode. There is still more to optimize, but I'm happy that I can
>> see similar results now.
>>
>> Thanks so much for all your help!
>>
>
> Again to confirm, the SCST numbers that were previously quoted were
> all with buffered FILEIO then, right..?
>
> That would make sense, as I don't think 4x SATA 3 SSDs with 12 active
> partitions can sustain 1000 MB/sec of large block random I/O.
>
> --nab
>
Yes, it was with buffered FILEIO when SCST was used.
Thanks,



