Re: simple performance tests

Hi,

You were running all the Ceph components on the same host? I.e., mon, mds
and osd all on the same machine?

Imho that is not the way to test Ceph.
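
Just as a rough sketch of what I mean by spreading things out: a ceph.conf
for separate hosts would look roughly like this (the hostnames, monitor
address and device are made up, and the exact options depend on your
version):

  [mon.a]
          host = mon1
          mon addr = 192.168.0.10:6789
  [mds.a]
          host = mds1
  [osd.0]
          host = osd1
          btrfs devs = /dev/sdb
  [osd.1]
          host = osd2
          btrfs devs = /dev/sdb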

I've done some benchmarking myself with 6 physical machines for OSDs
(different hardware in each machine), and I was seeing about 30~40 MB/s
over a Gigabit network.
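
Something along these lines, run from a separate client machine, gives a
fairer picture of what the cluster does over the network (the monitor IP
and paths are just placeholders, and this assumes no authentication is
configured):

  # mount the cluster with the kernel client on a machine that runs no daemons
  mount -t ceph 192.168.0.10:/ /mnt/ceph
  # sequential write; conv=fdatasync so the page cache doesn't inflate the result
  dd if=/dev/zero of=/mnt/ceph/ddtest bs=1M count=4096 conv=fdatasync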

Of course, that could be a lot better, but I think they are focusing on
stability right now rather than performance.

Also, a local FS will be faster than a network filesystem, since the
local filesystem doesn't have to take multiple clients into the
equation.

Try benchmarking Ceph against NFS and you will start seeing different
results.
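
For example, mount both next to each other and run the same dbench load
against each mount point (server names and paths below are placeholders):

  mount -t nfs  nfsserver:/export /mnt/nfs
  mount -t ceph 192.168.0.10:/    /mnt/ceph
  # dbench runs in the current directory by default, so do one run per mount
  cd /mnt/nfs  && dbench 10
  cd /mnt/ceph && dbench 10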

-- 
Met vriendelijke groet,

Wido den Hollander
Hoofd Systeembeheer / CSO
Telefoon Support Nederland: 0900 9633 (45 cpm)
Telefoon Support België: 0900 70312 (45 cpm)
Telefoon Direct: (+31) (0)20 50 60 104
Fax: +31 (0)20 50 60 111
E-mail: support@xxxxxxxxxxxx
Website: http://www.pcextreme.nl
Kennisbank: http://support.pcextreme.nl/
Netwerkstatus: http://nmc.pcextreme.nl


On Wed, 2010-06-16 at 09:15 +0000, Thomas Mueller wrote:
> hi
> 
> I've done some performance tests on a small system.
> 
> HW:
> * Supermicro X7SPA-HF (with IPMI and KVM-over-IP integrated)
> * Atom D510 1.6GHz dual core with HT
> * WD RE3 500GB 7200rpm disks
> * 4GB RAM
> 
> SW:
> * Debian 5.0 "Lenny"
> * Kernel 2.6.32 (backports.org)
> * ceph/unstable
> * ceph-client-standalone/unstable-backport
> 
> Values are in MB/s. All tests are local.
> 
> dbench 1 results:
> * 1 disk, btrfs: 115
> * 2 disks, btrfs data/metadata raid0: 115
> * 1 osd/disk, 1 mds, 1 mon: 7
> * 2 osd/disk, 1 mds, 1 mon: 6.2117
> * 1 disk, btrfs, samba: 14.02
> * 2 disks, btrfs data/metadata raid0, samba: 13.781
> 
> dbench 10 results:
> * 1 disk, btrfs: 271.724
> * 2 disks, btrfs data/metadata raid0: 270.043
> * 1 osd/disk, 1 mds, 1 mon: -
> * 2 osd/disk, 1 mds, 1 mon: 12.1862
> * 1 disk, btrfs, samba: 20.22
> * 2 disks, btrfs data/metadata raid0, samba: 20.26
> 
> dd bs=1M count=8192 if=/dev/zero
> * 1 disk, btrfs: 126
> * 2 disks, btrfs data/metadata raid0:  271
> * 1 osd/disk, 1 mds, 1 mon: -
> * 2 osd/disk, 1 mds, 1 mon: 51.2
> * 1 disk, btrfs, samba: 95.2
> * 2 disks, btrfs data/metadata raid0, samba: 85.9
> 
> Will add an AOC-USAS-H8iR (LSI) RAID card to the mix if I get the cables
> needed.
> 
> I'm a bit surprised about the low rates of Ceph. I'll try to do the same
> tests on a more powerful machine.
> 
> - Thomas
> 


