ESXi+LIO performance question

Hi list,

In my setup I proxy RBD to ESXi via an iSCSI target using the fileio backstore.

But I see a performance problem and I can't pin down where it comes from.

Simple case:
Ceph cluster with 15 disks.
Running fio against the RBD device on the proxy machine, I see about 900 IOPS with a 50/50 random read/write mix at an I/O depth of 64.
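
Roughly, the baseline run looks like this (a sketch; the 4k block size and the /dev/rbd0 path are assumptions, adjust to your mapping):

    fio --name=rbd-baseline --filename=/dev/rbd0 --direct=1 \
        --ioengine=libaio --rw=randrw --rwmixread=50 \
        --bs=4k --iodepth=64 --runtime=60 --time_based --group_reporting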

Then I create an iSCSI target on top of this RBD and connect to it from ESXi; the target setup looks roughly like the sketch below.
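
(Minimal targetcli sketch; the backstore name, device path, and IQN here are placeholders, and a portal plus an ACL for the ESXi initiator are needed as well:)

    targetcli /backstores/fileio create name=rbd0 file_or_dev=/dev/rbd0
    targetcli /iscsi create iqn.2003-01.org.linux-iscsi.proxy:rbd0
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.proxy:rbd0/tpg1/luns \
        create /backstores/fileio/rbd0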
On the resulting RBD-backed datastore I place a VM with fio and run the
same test again. What I see via iostat:
On the VM running fio:
iodepth 64, latency ~700 ms

On the proxy machine (iSCSI target + RBD):
iodepth ~8 (+/- 4) and latency 20-300 ms, with no sign of CPU saturation.
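
The proxy-side numbers come from extended iostat output, along these lines (device name assumed):

    iostat -x rbd0 1
    # avgqu-sz (aqu-sz in newer sysstat) ~ average queue depth at the device
    # await                              ~ average I/O latency in ms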

Maybe someone has already seen and fixed this problem. As I understand
it, ESXi is not pushing all outstanding commands down to the proxy machine.

Scheme again:
Ceph <---> RBD <---> iSCSI target <---> iSCSI initiator <---> fio VM

The fio VM generates the load and keeps 64 commands in its queue, with crazy latency.
On the RBD side, iostat shows only ~8 commands in the queue, with acceptable latency.

How can I debug what causes this in the iSCSI target <---> iSCSI
initiator stack?
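
A couple of places I know to check so far, in case it helps (a sketch; paths and names will differ per setup): the command window LIO advertises per session is bounded by the TPG's default_cmdsn_depth attribute in configfs, and ESXi also caps outstanding commands per device:

    # on the proxy (LIO target); <target-iqn> is a placeholder
    cat /sys/kernel/config/target/iscsi/<target-iqn>/tpgt_1/attrib/default_cmdsn_depth

    # on ESXi: per-device maximum queue depth
    esxcli storage core device list
    # live counters: run esxtop, press 'u' for disk devices, watch DQLEN/ACTV/QUED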

And I don't observe any latency problem with ioping (i.e. if the queue
stays small enough, everything works perfectly).
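
(The ioping check is just, for example, with the device path assumed as above:

    ioping -c 10 /dev/rbd0
)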

Thanks

-- 
Have a nice day,
Timofey.


