Re: CephFS posix test performance

> On Jul 1, 2015, at 00:34, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> 
> On Tue, Jun 30, 2015 at 11:37 AM, Yan, Zheng <zyan@xxxxxxxxxx> wrote:
>> 
>>> On Jun 30, 2015, at 15:37, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>>> 
>>> On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng <zyan@xxxxxxxxxx> wrote:
>>>> I tried the 4.1 kernel and 0.94.2 ceph-fuse. Their performance is about the same.
>>>> 
>>>> fuse:
>>>> Files=191, Tests=1964, 60 wallclock secs ( 0.43 usr  0.08 sys +  1.16 cusr  0.65 csys =  2.32 CPU)
>>>> 
>>>> kernel:
>>>> Files=191, Tests=2286, 61 wallclock secs ( 0.45 usr  0.08 sys +  1.21 cusr  0.72 csys =  2.46 CPU)
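
For reference, summaries in that format come from Perl's prove harness running the PJD POSIX test suite. A minimal sketch of such a run, assuming the pjdfstest suite and a CephFS mount at /cephfs (repository URL and paths are illustrative, not taken from this thread):

# a sketch, assuming pjdfstest and a CephFS mount at /cephfs
# (URL and paths are illustrative, not from this thread)
cd ~ && git clone https://github.com/pjd/pjdfstest.git
(cd pjdfstest && autoreconf -ifs && ./configure && make pjdfstest)
mkdir -p /cephfs/pjd-scratch && cd /cephfs/pjd-scratch
sudo prove -r ~/pjdfstest/tests  # prints the Files=..., Tests=..., wallclock summary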
>>> 
>>> On Friday, I tried stock 3.10 vs 4.1 and they were about the same as
>>> well (a few tests failed in 3.10 though).  However, Dan is using
>>> 3.10.0-229.7.2.el7.x86_64, which is 3.10 with a lot of backports, so
>>> it's not quite the same.  Dan, are the numbers you are seeing
>>> consistent?
>>> 
>> 
>> I just tried the 3.10.0-229.7.2.el7 kernel. It's a little slower than the 4.1 kernel.
>> 
>> 4.1:
>> Files=191, Tests=2286, 61 wallclock secs ( 0.45 usr  0.07 sys +  1.24 cusr  0.76 csys =  2.52 CPU)
>> 
>> 3.10.0-229.7.2.el7:
>> Files=191, Tests=1964, 75 wallclock secs ( 0.45 usr  0.09 sys +  1.73 cusr  5.04 csys =  7.31 CPU)
>> 
>> Dan, did you run the tests on the same client machine? I think network latency affects the run time of this test a lot.
>> 
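
Most of these operations are synchronous round trips to the MDS, so the client-to-MDS RTT times the operation count gives a rough lower bound on the wallclock time. A quick sketch for estimating that cost (the MDS hostname is illustrative, not from this thread):

ceph mds stat                  # identify the active MDS
ping -c 10 mds1.example.com    # per-round-trip latency between client and MDS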
> 
> All the tests run on the same client, but it seems there is some
> variability in the tests. Now I get:
> 
> Linux 3.10.0-229.7.2.el7.x86_64
> Files=184, Tests=1957, 91 wallclock secs ( 0.72 usr  0.19 sys +  5.68 cusr 10.09 csys = 16.68 CPU)
> 
> Linux 4.1.0-1.el7.elrepo.x86_64
> Files=184, Tests=1957, 84 wallclock secs ( 0.75 usr  0.44 sys +  5.17 cusr  9.77 csys = 16.13 CPU)
> 
> ceph-fuse 0.94.2:
> Files=184, Tests=1957, 78 wallclock secs ( 0.69 usr  0.17 sys +  5.08 cusr  9.93 csys = 15.87 CPU)
> 
> 
> I don't know if it's related -- and maybe I misunderstood something
> fundamental -- but we don't manage to get FUSE or the kernel client to
> use the page cache:
> 
> I set fuse_use_invalidate_cb = true, then used linux-fincore to see what's cached:
> 
> # df -h .
> Filesystem      Size  Used Avail Use% Mounted on
> ceph-fuse       444T  135T  309T  31% /cephfs
> # cat zero > /dev/null
> # linux-fincore zero
> filename    size         total_pages  min_cached page  cached_pages  cached_size  cached_perc
> --------    ----         -----------  ---------------  ------------  -----------  -----------
> zero        104,857,600       25,600               -1             0            0         0.00
> ---
> total cached size: 0
> 
> The kernel client has the same behaviour. Is this expected?

Yes. PJD only tests metadata operations; the page cache is not involved in those operations.
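
A minimal way to see the distinction on a mounted client, reusing the zero file from above. Note that even buffered data reads populate the page cache only while the MDS grants the client the file-cache capability (Fc); with Fc revoked, e.g. under concurrent access, reads bypass the cache, which may be what the fincore output above shows:

# metadata-only operations, as in PJD -- no data pages are read or cached
stat zero; chmod 644 zero; touch zero
linux-fincore zero          # cached_pages stays 0
# a buffered data read can populate the page cache, caps permitting
cat zero > /dev/null
linux-fincore zero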

Regards
Yan, Zheng


> 
> Cheers, Dan
