Re: low performance of ceph, why ?

On 07/26/2012 09:53 AM, Hosfore wrote:
I have now set up a small Ceph cluster, with the mon and mds on the same machine and two OSDs on two other machines, and I have put each OSD's data and journal directories on two separate disks. But the mdtest results are very poor, as shown below:
------------------------------------------------------------------------------
fs90:/mnt/ceph # mdtest -d /mnt/ceph/tt3 -n 200 -i 2 -w 0
-- started at 07/26/2012 22:37:08 --

mdtest-1.8.3 was launched with 1 total task(s) on 1 nodes
Command line used: mdtest -d /mnt/ceph/tt3 -n 200 -i 2 -w 0
Path: /mnt/ceph
FS: 1.8 TiB   Used FS: 5.8%   Inodes: 0.0 Mi   Used Inodes: 100.0%

1 tasks, 200 files/directories

SUMMARY: (of 2 iterations)
    Operation                  Max        Min       Mean    Std Dev
    ---------                  ---        ---       ----    -------
    Directory creation:   2218.451   1138.848   1678.650    539.802
    Directory stat    : 870187.552 840541.884 855364.718  14822.834
    Directory removal :   2830.938   2828.647   2829.793      1.146
    File creation     :   1987.224   1972.523   1979.873      7.350
    File stat         : 854237.067 850771.602 852504.335   1732.732
    File removal      :   2651.082   2164.132   2407.607    243.475
    Tree creation     :   1680.410   1559.801   1620.105     60.305
    Tree removal      :      1.011      0.578      0.794      0.216

-- finished at 07/26/2012 22:37:11 --
-----------------------------------------------------------------------------
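
For reference, a layout like the one described above (mon and mds on one host, two OSDs each keeping data and journal on separate disks) is typically expressed in ceph.conf along these lines; the hostnames, address, and paths below are placeholders rather than values from the report:

    [global]
            auth supported = none        # illustrative; cephx is also common

    [mon.a]
            host = mon-mds-host          # mon and mds share one machine
            mon addr = 192.168.1.10:6789

    [mds.a]
            host = mon-mds-host

    [osd.0]
            host = osd-host-1
            osd data = /data/osd.0                 # data disk
            osd journal = /journal/osd.0/journal   # journal on a separate disk
            osd journal size = 1000                # in MB; needed for a file-based journal

    [osd.1]
            host = osd-host-2
            osd data = /data/osd.1
            osd journal = /journal/osd.1/journal
            osd journal size = 1000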

Hi,

Thanks for taking the time to test this. I haven't been able to really dig into metadata tests like mdtest yet, though it's on my list of things to do! For now, my guess is that the overhead of the extra layers of code, the network round trips, and the general lack of optimization in CephFS are holding things back. This is something we will eventually be working on, but right now our focus is more on RadosGW and RBD. You may want to look at:

http://ceph.newdream.net/papers/weil-ceph-osdi06.pdf

On page 10 there are some MDS performance numbers. With a single MDS, it looks like your numbers are roughly in line with the makedirs and makefiles numbers Sage reported at the time.
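
One other thing worth noting from your output: mdtest was launched with a single task ("1 total task(s) on 1 nodes"), so the create and stat rates mostly reflect the round-trip latency of one client talking to one MDS rather than what the MDS can sustain in aggregate. If you want something closer to the multi-client runs in the paper, mdtest can be driven with several MPI tasks, for example (the task count and launcher options below are just an illustration and depend on your MPI installation):

    mpirun -np 8 mdtest -d /mnt/ceph/tt3 -n 200 -i 2 -w 0

The per-operation latency won't improve that way, but it should give a better picture of aggregate MDS throughput.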

Mark

--
Mark Nelson
Performance Engineer
Inktank

