Re: ESXi+LIO performance question


 



2015-12-21 10:54 GMT+03:00 Nicholas A. Bellinger <nab@xxxxxxxxxxxxxxx>:
> Hi Timofey,
>
> On Sun, 2015-12-13 at 15:29 +0300, Timofey Titovets wrote:
>> Hi list,
>> In my case I proxy an RBD device to ESXi via an iSCSI target with a FILEIO backend,
>> but I see a performance problem and I can't figure out where it comes from.
>>
>> Simple case:
>> Ceph cluster with 15 disks.
>> With fio against the RBD device on the proxy machine I see ~900 IOPS with a
>> 50/50 random read/write workload at an I/O depth of 64.
>>
>> Then I create an iSCSI target on this RBD and connect ESXi to it.
>> On the resulting RBD datastore I place a VM with fio and run the same test
>> again. What I see with iostat:
>> On the VM with fio:
>> iodepth 64, latency ~700 ms
>>
>> On the proxy machine with iSCSI and RBD:
>> iodepth ~8 (+/- 4) and latency 20-300 ms, and I don't see any problem with CPU load.
>>
>> Maybe someone has already seen and fixed this problem; as I understand it,
>> ESXi doesn't pass all of its queued commands through to the proxy machine.
>>
>> The setup again:
>> Ceph <---> rbd <---> iSCSI target <----> iSCSI Initiator <---> FIO VM
>>
>> The fio VM generates the load and keeps 64 commands in its queue, with very high latency.
>>
>> On the RBD side, iostat shows only ~8 commands in the queue, with acceptable latency.
>>
>> How can I debug what causes this problem in the iSCSI target <---->
>> iSCSI initiator stack?
>>
>> And I don't observe any latency problem with ioping (i.e. if the queue is
>> small enough, everything works fine).
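
A test along these lines can be reproduced roughly as follows; the fio parameters
and the /dev/rbd0 device name are assumptions reconstructed from the description
above, not the exact commands that were run:

    # 50/50 random read/write at queue depth 64 against the mapped RBD device
    fio --name=rbdtest --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 --runtime=60 --time_based

    # in parallel, watch the per-device queue depth (avgqu-sz) and latency (await)
    iostat -x /dev/rbd0 1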
>
> It's hard to say based on the provided information alone.
>
> A few things to consider though:
>
> FILEIO backends are all using O_DSYNC by default, which effectively
> disables Linux/VFS buffer-cache write-back operation.  That is, all
> FILEIO writes become write-through, and no WRITE is acknowledged until
> the underlying storage has marked blocks as persisted.
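
The difference is easy to observe outside of LIO with plain dd against a backing
file; a rough sketch (the path and sizes are placeholders, reusing the
/root/fileio example further down):

    # write-through, roughly what default (O_DSYNC) FILEIO behaves like:
    # each 4k write must reach stable storage before the next one is issued
    dd if=/dev/zero of=/root/fileio bs=4k count=1000 oflag=dsync

    # buffered: writes land in the page cache and are flushed later,
    # roughly what Buffered-WCE mode behaves like
    dd if=/dev/zero of=/root/fileio bs=4k count=1000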
>
> You'll really want to understand what buffered FILEIO operation means
> for your setup, and the implications it has for data loss of acked but not
> flushed WRITEs during a power failure event.
>
> That aside, to use FILEIO backends in buffer-cache mode, you'll need to
> set this value at creation time in targetcli, or via "buffered
> yes" in rtslib v3.x config:
>
> storage fileio disk tmpfile {
>     buffered yes
>     path /root/fileio
>     size 2.0GB
>
>     <<<<< SNIP >>>>>
> }
>
> and verify it has been enabled with "Mode: Buffered-WCE" in configfs:
>
> root@scsi-mq:~# cat /sys/kernel/config/target/core/fileio_1/tmpfile/info
> Status: ACTIVATED  Max Queue Depth: 128  SectorSize: 512  HwMaxSectors: 16384
>         TCM FILEIO ID: 0        File: /root/fileio  Size: 2147483648  Mode: Buffered-WCE
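
With targetcli-fb the same thing can be set at creation time; a sketch only,
since the exact parameter name differs between targetcli versions (write_back
here is the targetcli-fb counterpart of the "buffered yes" setting above):

    targetcli /backstores/fileio create name=tmpfile file_or_dev=/root/fileio \
        size=2G write_back=true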
>
> That all said, the folks I'm aware of using RBD seriously are using
> IBLOCK backends, in order to communicate asynchronously via struct bio
> directly into make_request_fn() or the blk-mq based queue_rq() callback.
>
> So unless you've got a good reason to use FILEIO buffered IO mode, I'd
> recommend considering IBLOCK instead.
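
For the IBLOCK route the usual pattern is to map the image through the kernel
rbd driver and export the resulting block device; a minimal sketch, with a
pool/image name that is purely illustrative:

    # map the RBD image to a kernel block device (prints e.g. /dev/rbd0)
    rbd map rbd/esxi-lun0

    # export it through an IBLOCK backstore instead of FILEIO
    targetcli /backstores/block create name=esxi-lun0 dev=/dev/rbd0

WRITEs to an IBLOCK backstore are submitted to the rbd driver as bios rather
than going through the VFS, which is the asynchronous path described above.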
>
> --nab
>

Thanks for the answer.
I use write-back (buffered) mode, but it doesn't make any difference in performance.
With the IBLOCK backend I see many hangs.

-- 
Have a nice day,
Timofey.


