TCMU file handler performance

Hi Andy,

I'm looking at TCMU's performance and just played with tcmu-runner's
file_example handler ...

root@target:~# cat /proc/partitions
major minor  #blocks  name

 259        0  937692504 nvme0n1
 259        1    1048576 nvme0n1p1
   8       32    1048576 sdc
   8       48    1048576 sdd

Frontend for both sdc and sdd is loopback
Backend for sdc is iblock
Backend for sdd is tcmu
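For reference, a loopback setup along these lines can be created with
targetcli. This is only a sketch: the backstore name iblock0 is an
assumption, and the tcmu side would use the analogous
/backstores/user:file path, whose exact create arguments depend on the
targetcli-fb version.

```shell
# Sketch only -- names are illustrative, not from the original setup.

# iblock backstore on the NVMe partition:
targetcli /backstores/block create name=iblock0 dev=/dev/nvme0n1p1

# Create a loopback target; targetcli prints the generated naa WWN:
targetcli /loopback create

# Export the backstore as a LUN behind that loopback target
# (substitute the WWN printed by the previous command):
targetcli /loopback/naa.<generated-wwn>/luns create /backstores/block/iblock0
```

The tcmu-backed sdd device is set up the same way, except the LUN is
created from a /backstores/user:file backstore served by tcmu-runner.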

The underlying device is /dev/nvme0n1p1

/dev/nvme0n1p1:
Jobs: 4 (f=4): [rrrr] [100.0% done] [967.2MB/0KB/0KB /s] [248K/0/0
iops] [eta 00m:00s]

/dev/sdc(iblock loopback):
Jobs: 4 (f=4): [rrrr] [100.0% done] [965.6MB/0KB/0KB /s] [247K/0/0
iops] [eta 00m:00s]

/dev/sdd(tcmu loopback):
1) First test: drop the page cache first; throughput is only about 66 MB/s

echo 3 > /proc/sys/vm/drop_caches

Jobs: 4 (f=4): [rrrr] [100.0% done] [66592KB/0KB/0KB /s] [16.7K/0/0
iops] [eta 00m:00s]

2) Second test: the data is already in the page cache, so it's very fast
Jobs: 4 (f=4): [rrrr] [100.0% done] [1360MB/0KB/0KB /s] [348K/0/0
iops] [eta 00m:00s]

I guess the poor performance comes from tcmu-runner/file_example.c.
I'll try to modify file_example.c to use AIO+DIO (asynchronous, direct I/O).

My goal is to verify whether tcmu's file handler can reach performance
similar to iblock's.

What do you think?

Thanks,
Ming
--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


