On 11/26/2013 08:14 AM, Patrick McGarry wrote:
Adding ceph-user.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Nov 26, 2013 at 1:49 AM, <haiquan517@xxxxxxxx> wrote:
Hi,
    I'm from China. Recently we have been testing Ceph block storage
performance, and we have run into an issue: we store about 10 image files
on RBD block storage, each roughly 100 GB in size, and reads and writes on
the RBD block devices are slow (read speed around 50 MB/s, write speed
around 80 MB/s). Our test environment is 5 servers (2 MDS, 3 MON,
5 OSD), with 33 TB per OSD, a 10000 Mb/s network, and 32 GB of memory.
    Copying one of these images within the same RBD directory is also very
slow. Could you please help us analyze this?
    Thanks a lot!!!
Hi!
For RBD you don't need MDS servers, so you can take those out of the
cluster if you like. Otherwise, I suspect your configuration could be made
more optimal. If each OSD is 33T, does that mean you are using
a RAID array for each OSD? Typically Ceph performs best with 1 disk per
OSD. Also, what replication level are you using?
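If you are not sure, the replication level (the pool "size") can be read
directly from the monitors. A rough sketch, assuming your images live in the
default "rbd" pool (adjust the pool name if yours differs):

    # how many copies of each object the rbd pool keeps
    ceph osd pool get rbd size

    # how the OSDs are laid out in the CRUSH map
    ceph osd tree

    # for comparison only: drop a test pool to 2 copies
    # ceph osd pool set rbd size 2

Every client write is written once per replica, so sustained write bandwidth
is roughly the aggregate disk bandwidth divided by the replica count, and
halved again if the journals share the data disks.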
Beyond that, the hardware and software setup may be important.
- What disk controller are you using, and is its writeback cache enabled?
- Do you use SSDs for journals?
- What CPU do you have in each node?
- Does your chassis have expanders?
- What OS/Kernel?
- Are you using QEMU/KVM or Kernel RBD Driver?
- If QEMU/KVM, do you have RBD writeback cache enabled? (example config below)
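For reference, here is a rough sketch of what enabling the RBD writeback
cache for QEMU/KVM can look like, plus a way to baseline raw cluster
throughput so you can tell whether the slowness is in the RBD client path or
in the OSDs themselves. The pool/image names and cache sizes are only
examples:

    # /etc/ceph/ceph.conf on the hypervisor (client side)
    [client]
        rbd cache = true
        rbd cache size = 33554432        ; 32 MB cache per image
        rbd cache max dirty = 25165824   ; start flushing at 24 MB dirty

    # or per disk on the QEMU command line (cache=writeback turns the RBD cache on)
    # qemu ... -drive file=rbd:rbd/image1,format=raw,cache=writeback

    # baseline the cluster itself, bypassing RBD entirely
    # (--no-cleanup keeps the objects so the seq read test has data to read)
    rados bench -p rbd 60 write --no-cleanup
    rados bench -p rbd 60 seq

If rados bench already tops out in the same 50-80 MB/s range, the limit is on
the OSD side (disks, journals, replication, network); if it is much faster,
the RBD client path (caching, queue depth, kernel driver vs. librbd) is the
first place to look.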
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com