Ceph Performance MB/sec

Hi to all,
       I'm testing Ceph on a 4-server configuration with 300GB 15K SAS disks used for the OSDs; the journal is placed on the same partition as each OSD. I want to know whether it's possible to obtain 400-700 MB/s of throughput with Ceph. I've tested both XFS and Btrfs on top of the RBD volume, and the OSDs themselves are created with XFS. I mapped an RBD to a client and ran some dd statements for testing, and I obtained roughly 1.1 GB/s and 700 MB/s... I really don't know whether these values are realistic, and they make me doubt the test. Any output, performance test, or comment will be really appreciated. Some output of the tests:

ceph@ceph-deploy01:/dev/rbd/ceph-openstack$ sudo mkfs.btrfs /dev/rbd/ceph-openstack/testPool

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/rbd/ceph-openstack/testPool
    nodesize 4096 leafsize 4096 sectorsize 4096 size 10.00GB
Btrfs Btrfs v0.19
ceph@ceph-deploy01:/dev/rbd/ceph-openstack$
ceph@ceph-deploy01:/dev/rbd/ceph-openstack$ sudo mount /dev/rbd/ceph-openstack/testPool /mnt/ceph-btrfs-test
ceph@ceph-deploy01:/dev/rbd/ceph-openstack$ cd /mnt/ceph-btrfs-test
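For reference, the image I'm writing to can be checked with rbd info (same pool/image names as above); I haven't pasted that output here:

# show size, order/object size and format of the test image
sudo rbd info ceph-openstack/testPool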


ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3 4; do sudo dd if=/dev/zero of=./a bs=1M count=1000; done
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.774497 s, 1.4 GB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.919454 s, 1.1 GB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.954178 s, 1.1 GB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.943058 s, 1.1 GB/s
ceph@ceph-deploy01:/mnt/ceph-btrfs-test$
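Since no sync or direct flags were used, I suspect a good part of these numbers is the client page cache rather than the cluster. A variant I still have to try, which should force the data out before dd reports a rate, would be something like:

# flush the file to disk before dd reports the rate
sudo dd if=/dev/zero of=./a bs=1M count=1000 conv=fdatasync

# or bypass the page cache entirely with direct I/O
sudo dd if=/dev/zero of=./a bs=1M count=1000 oflag=direct

# drop caches between runs so repeats don't hit cached data
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches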
 
ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3; do sudo dd if=/dev/zero of=./b bs=1G count=4; done
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 4.49293 s, 956 MB/s
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 7.15038 s, 601 MB/s
4+0 records in
4+0 records out
3214934016 bytes (4.2 GB) copied, 4.36167 s, 737 MB/s

ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3; do sudo dd if=/dev/zero of=./b bs=1G count=3; done
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 3.68271 s, 875 MB/s
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 4.25035 s, 758 MB/s
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 4.41428 s, 730 MB/s
ceph@ceph-deploy01:/mnt/ceph-btrfs-test$


ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3; do sudo dd if=/dev/zero of=./b bs=4K count=10000; done
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.056885 s, 720 MB/s
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0627104 s, 653 MB/s
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0617505 s, 663 MB/s


ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3; do sudo dd if=/dev/zero of=./b bs=1G count=2; done
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 2.59319 s, 828 MB/s
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 3.10539 s, 692 MB/s
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 3.05903 s, 702 MB/s
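As a cross-check that doesn't go through btrfs or the kernel RBD client, I could also benchmark the pool directly with rados bench (not run yet; 'ceph-openstack' is the pool used above, and I'm assuming --no-cleanup is available in this release):

# 60-second write benchmark against the pool, 16 concurrent ops, keep the objects
sudo rados bench -p ceph-openstack 60 write -t 16 --no-cleanup

# sequential read benchmark over the objects left behind by the write test
sudo rados bench -p ceph-openstack 60 seq -t 16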


Thanks in advance,

Best regards,
 

German Anders
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
