Hi,
The first test wrote a 500 MB file and was clocked at 1.2 GB/s. The second test wrote a 5000 MB file at 17.5 MB/s. The third test read that file back at ~400 MB/s.
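
My guess is the 500 MB run fit entirely in the client page cache, while the 5 GB run shows the sustained write path. As a rough check (a sketch only, assuming the same /mnt/temp mount), forcing the data to disk should bring the two write numbers much closer together:

dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000 conv=fdatasync   # flush to disk before dd reports the rate
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000 oflag=direct     # or bypass the page cache entirely
echo 3 > /proc/sys/vm/drop_caches                                         # drop caches so a re-read is not served from RAM
dd if=/mnt/temp/test.out of=/dev/null bs=512k count=1000 iflag=direct     # uncached read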
On Mon, Apr 8, 2013 at 2:56 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
More details, please. You ran the same test twice and performance went
up from 17.5MB/s to 394MB/s? How many drives in each node, and of what
kind?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki
<ziemowit@xxxxxxxxxxxxx> wrote:
> Hi,
>
> I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
> here is the performance I am seeing:
>
> [root@triton temp]# !dd
> dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
> 1000+0 records in
> 1000+0 records out
> 524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
> [root@triton temp]# dd if=/dev/zero of=/mnt/temp/test.out bs=512k
> count=10000
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 299.077 s, 17.5 MB/s
> [root@triton temp]# dd if=/mnt/temp/test.out of=/dev/null bs=512k
> count=10000
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 13.3015 s, 394 MB/s
>
> Does that look right? How do I check whether this is a network problem? I
> remember seeing a kernel issue related to large MTUs.
>
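
A few checks that might help separate the network from the disks (a sketch only; the hostnames and pool name below are placeholders, and the payload size should be adjusted to just under the actual IPoIB MTU):

ping -M do -s 16000 -c 5 <other-node>   # DF bit set, near-MTU payload, confirms the large-MTU path works end to end
iperf -s                                # on one node
iperf -c <other-node> -t 30             # on another node: raw IPoIB throughput between hosts
rados bench -p data 60 write            # exercise RADOS directly, bypassing the filesystem client; 'data' is just an example pool

If rados bench writes at a healthy rate while dd through the mount does not, the bottleneck is more likely in the client or caching layer than in the network.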
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com