Re: Performance problems

More details, please. You ran a write test at 17.5 MB/s and then read the
same file back at 394 MB/s? How many drives are in each node, and of what
kind?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki
<ziemowit@xxxxxxxxxxxxx> wrote:
> Hi,
>
> I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
> here is the performance I am seeing:
>
> [root@triton temp]# !dd
> dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
> [root@triton temp]# dd if=/dev/zero of=/mnt/temp/test.out bs=512k
> count=10000
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 299.077 s, 17.5 MB/s
> [root@triton temp]# dd if=/mnt/temp/test.out of=/dev/null bs=512k
> count=10000
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 13.3015 s, 394 MB/s
>
> Does that look right?  How do I check that this is not a network problem?
> I remember seeing a kernel issue related to large MTUs.
>
>
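On the question of separating a network problem from a storage one, a common sketch is to sync the write path and test the link independently (the file path and the choice of iperf here are illustrative, not from this thread):

```shell
# Write test that bypasses the page cache: conv=fdatasync makes dd flush
# data to stable storage before reporting a rate, so the number reflects
# the real write path rather than RAM (which likely explains the 1.2 GB/s
# first run above).
dd if=/dev/zero of=/tmp/ceph-test.out bs=512k count=100 conv=fdatasync

# Test the raw link between two nodes separately (iperf is one common
# choice; any bandwidth tester will do). On one node:
#   iperf -s
# On another:
#   iperf -c <server-ip>
# If iperf shows full IPoIB bandwidth while the synced dd stays slow,
# the bottleneck is on the storage side rather than the network.
```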
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
