Re: Extremely slow small files rewrite performance

Can you enable debugging on the client ("debug ms = 1", "debug client
= 20") and mds ("debug ms = 1", "debug mds = 20"), run this test
again, and post them somewhere for me to look at?
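For reference, those options can go in ceph.conf on the client and MDS hosts; a sketch (the section names below are the standard [client]/[mds] sections, adjust if you use per-daemon sections):

```ini
# On the host running ceph-fuse:
[client]
    debug ms = 1
    debug client = 20

# On the host running the active MDS:
[mds]
    debug ms = 1
    debug mds = 20
```

Restart ceph-fuse and the MDS so the new levels take effect, or inject them at runtime with `ceph tell mds.* injectargs '--debug_ms 1 --debug_mds 20'`. By default the logs end up under /var/log/ceph/.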

While you're at it, can you try rados bench and see what sort of
results you get?
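A minimal rados bench run could look like this (the pool name "testbench" is only an example; substitute an existing pool or create a throwaway one first):

```shell
# Write benchmark for 30 seconds, keeping the objects for a read test
rados bench -p testbench 30 write --no-cleanup

# Sequential read benchmark against the objects just written
rados bench -p testbench 30 seq

# Remove the benchmark objects afterwards
rados -p testbench cleanup
```

That separates raw RADOS write/read throughput and latency from anything CephFS/MDS related.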
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, Oct 21, 2014 at 10:57 AM, Sergey Nazarov <natarajaya@xxxxxxxxx> wrote:
> It is CephFS mounted via ceph-fuse.
> I get the same results regardless of how many other clients have this
> fs mounted or what they are doing.
> The cluster is running on Debian Wheezy, kernel 3.2.0-4-amd64.
>
> On Tue, Oct 21, 2014 at 1:44 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>> Are these tests conducted using a local fs on RBD, or using CephFS?
>> If CephFS, do you have multiple clients mounting the FS, and what are
>> they doing? What client (kernel or ceph-fuse)?
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Tue, Oct 21, 2014 at 9:05 AM, Sergey Nazarov <natarajaya@xxxxxxxxx> wrote:
>>> Hi
>>>
>>> I just built a new cluster following these quickstart instructions:
>>> http://ceph.com/docs/master/start/
>>>
>>> And here is what I am seeing:
>>>
>>> # time for i in {1..10}; do echo $i > $i.txt ; done
>>> real 0m0.081s
>>> user 0m0.000s
>>> sys 0m0.004s
>>>
>>> And if I repeat the same command (when the files already exist):
>>>
>>> # time for i in {1..10}; do echo $i > $i.txt ; done
>>> real 0m48.894s
>>> user 0m0.000s
>>> sys 0m0.004s
>>>
>>> I was very surprised, so I tried rewriting just a single file:
>>>
>>> # time echo 1 > 1.txt
>>> real 0m3.133s
>>> user 0m0.000s
>>> sys 0m0.000s
>>>
>>> BTW, I don't think it is a problem with OSD speed or the network:
>>>
>>> # time sysbench --num-threads=1 --test=fileio --file-total-size=1G
>>> --file-test-mode=rndrw prepare
>>> 1073741824 bytes written in 23.52 seconds (43.54 MB/sec).
>>>
>>> Here is my ceph cluster status and version:
>>>
>>> # ceph -w
>>>     cluster d3dcacc3-89fb-4db0-9fa9-f1f6217280cb
>>>      health HEALTH_OK
>>>      monmap e4: 4 mons at
>>> {atl-fs10=10.44.101.70:6789/0,atl-fs11=10.44.101.91:6789/0,atl-fs12=10.44.101.92:6789/0,atl-fs9=10.44.101.69:6789/0},
>>> election epoch 40, quorum 0,1,2,3 atl-fs9,atl-fs10,atl-fs11,atl-fs12
>>>      mdsmap e33: 1/1/1 up {0=atl-fs12=up:active}, 3 up:standby
>>>      osdmap e92: 4 osds: 4 up, 4 in
>>>       pgmap v8091: 192 pgs, 3 pools, 123 MB data, 1658 objects
>>>             881 GB used, 1683 GB / 2564 GB avail
>>>                  192 active+clean
>>>   client io 1820 B/s wr, 1 op/s
>>>
>>> # ceph -v
>>> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
>>>
>>> All nodes are connected via a gigabit network.
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com