Re: [Virtio-fs] [QUESTION] A performance problem for buffer write compared with 9p

On 2019/8/22 21:07, Miklos Szeredi wrote:
On Thu, Aug 22, 2019 at 2:48 PM wangyan <wangyan122@xxxxxxxxxx> wrote:

On 2019/8/22 19:43, Miklos Szeredi wrote:
On Thu, Aug 22, 2019 at 2:59 AM wangyan <wangyan122@xxxxxxxxxx> wrote:
I will test it when I get the patch, and post the results compared with 9p.

Could you please try the attached patch?  My guess is that it should
improve the performance, perhaps by a big margin.

Further improvement is possible by eliminating page copies, but that
is less trivial.

Thanks,
Miklos

Using the same test model, the test results are:
        1. Latency
                virtiofs: avg-lat is 15.40 usec, higher than before (6.64 usec).
                4K: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
                fio-2.13
                Starting 1 process
                Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/142.4MB/0KB /s] [0/36.5K/0 iops] [eta 00m:00s]
                4K: (groupid=0, jobs=1): err= 0: pid=5528: Thu Aug 22 20:39:07 2019
                  write: io=6633.2MB, bw=226404KB/s, iops=56600, runt= 30001msec
                        clat (usec): min=2, max=40403, avg=14.77, stdev=33.71
                         lat (usec): min=3, max=40404, avg=15.40, stdev=33.74

        2. Bandwidth
                virtiofs: bandwidth is 280840KB/s, lower than before (691894KB/s).
                1M: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
                fio-2.13
                Starting 1 process
                Jobs: 1 (f=1): [f(1)] [100.0% done] [0KB/29755KB/0KB /s] [0/29/0 iops] [eta 00m:00s]
                1M: (groupid=0, jobs=1): err= 0: pid=5550: Thu Aug 22 20:41:28 2019
                  write: io=8228.0MB, bw=280840KB/s, iops=274, runt= 30001msec
                        clat (usec): min=362, max=11038, avg=3571.33, stdev=1062.72
                         lat (usec): min=411, max=11093, avg=3628.39, stdev=1064.53

According to these results, the patch doesn't help; it actually makes things
worse than before.
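
For reference, the two tests above correspond roughly to fio invocations like
the following (the exact job files are not shown in this thread, so the target
directory, file size and time_based setting are assumptions; block size,
ioengine, iodepth and the ~30s runtime are taken from the fio headers above):

        # 4K latency test: bs=4K, psync, iodepth=1, single job
        fio --name=4K --directory=/mnt/virtiofs --rw=write --bs=4K \
            --ioengine=psync --iodepth=1 --numjobs=1 \
            --size=8G --runtime=30 --time_based

        # 1M bandwidth test: bs=1M, psync, iodepth=1, single job
        fio --name=1M --directory=/mnt/virtiofs --rw=write --bs=1M \
            --ioengine=psync --iodepth=1 --numjobs=1 \
            --size=8G --runtime=30 --time_based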

Is the server started with "-owriteback"?

Thanks,
Miklos



I used these commands:
virtiofsd cmd:
./virtiofsd -o vhost_user_socket=/tmp/vhostqemu -o source=/mnt/share/ -o cache=always -o writeback
mount cmd:
mount -t virtio_fs myfs /mnt/virtiofs -o rootmode=040000,user_id=0,group_id=0
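
The qemu command line is not shown above; the guest was attached to the same
socket and tag. Something along these lines (memory size and paths here are
placeholders, device/option names as used by the virtio-fs development tree)
is assumed:

        qemu-system-x86_64 ... \
            -chardev socket,id=char0,path=/tmp/vhostqemu \
            -device vhost-user-fs-pci,chardev=char0,tag=myfs \
            -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
            -numa node,memdev=mem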

Thanks,
Yan Wang



