low performance of ceph, why?

I have configured a small Ceph cluster: the mon and mds run on one machine, and two osds run on two other machines, each with the data and journal directories on separate disks. However, the mdtest results are quite poor, as shown below:
------------------------------------------------------------------------------
fs90:/mnt/ceph # mdtest -d /mnt/ceph/tt3 -n 200 -i 2 -w 0
-- started at 07/26/2012 22:37:08 --

mdtest-1.8.3 was launched with 1 total task(s) on 1 nodes
Command line used: mdtest -d /mnt/ceph/tt3 -n 200 -i 2 -w 0
Path: /mnt/ceph
FS: 1.8 TiB   Used FS: 5.8%   Inodes: 0.0 Mi   Used Inodes: 100.0%

1 tasks, 200 files/directories

SUMMARY: (of 2 iterations)
   Operation                  Max        Min       Mean    Std Dev
   ---------                  ---        ---       ----    -------
   Directory creation:   2218.451   1138.848   1678.650    539.802
   Directory stat    : 870187.552 840541.884 855364.718  14822.834
   Directory removal :   2830.938   2828.647   2829.793      1.146
   File creation     :   1987.224   1972.523   1979.873      7.350
   File stat         : 854237.067 850771.602 852504.335   1732.732
   File removal      :   2651.082   2164.132   2407.607    243.475
   Tree creation     :   1680.410   1559.801   1620.105     60.305
   Tree removal      :      1.011      0.578      0.794      0.216

-- finished at 07/26/2012 22:37:11 --
-----------------------------------------------------------------------------
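
To rule out the raw OSDs and the network, I suppose I could also check the cluster state and run a plain RADOS benchmark first; a minimal sketch of what I have in mind (assuming the default "data" pool exists and the cluster is healthy):
-------------------------------------------------------------------------------
# check overall cluster and OSD health before benchmarking
ceph -s

# raw object write throughput against the default "data" pool for 60 seconds,
# to see whether the OSDs/journals themselves are slow
rados -p data bench 60 write
-------------------------------------------------------------------------------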

The file system on the osd disks is ext4. Could the mds or osd configuration be the cause of this poor result? I have run the same test on a local ext3 file system, where directory creation is usually more than 10000 operations per second. My ceph.conf is as below:
-------------------------------------------------------------------------------
[global]

[mon]
        mon data = /data/mon$id

        ; some minimal logging (just message traffic) to aid debugging
        debug ms = 1

[mon.0]
        host = fs98
        mon addr = 10.0.2.98:6789

[mds]
        ; where the mds keeps its secret encryption keys
        keyring = /data/keyring.$name
        mds cache size = 3000000

[mds.alpha]
        host = fs98

[osd]
        ; This is where the osd data volume is mounted (ext4 in my setup).
        osd data = /data
        filestore xattr use omap = true

        osd journal = /ceph/journal
        osd journal size = 512

[osd.0]
        host = fs97
[osd.1]
        host = fs91

-------------------------------------------------------------------------------
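One change I am considering for the next run is putting each journal on its own (faster) device and letting the filestore batch small writes between syncs; a rough sketch of the [osd] section I have in mind is below. The /dev/sdb1 path and the sync interval values are placeholders/guesses on my part, not something I have verified:
-------------------------------------------------------------------------------
[osd]
        osd data = /data
        filestore xattr use omap = true

        ; journal on a dedicated partition (placeholder device name) so that
        ; journal writes do not compete with the data directory on the same disk
        osd journal = /dev/sdb1
        journal dio = true

        ; let the filestore accumulate small metadata writes between syncs
        ; (the option names are the documented ones; the values are just guesses)
        filestore min sync interval = 0.01
        filestore max sync interval = 5
-------------------------------------------------------------------------------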
