I ran the test with the following steps:
+ create an image of size 100 GB in pool "data"
+ map that image on one server
+ mkfs.xfs /dev/rbd0 and mount /dev/rbd0 /mnt
+ run a write benchmark on that mount point with dd:
  dd if=/dev/zero of=/mnt/good2 bs=1M count=10000 oflag=direct
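For reference, the full command sequence would look roughly like this (the image name "test-image" is a placeholder; the pool, device, and mount point are as described above):

  rbd create data/test-image --size 102400    # 100 GB image in pool "data"
  rbd map data/test-image                     # appears as /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt
  dd if=/dev/zero of=/mnt/good2 bs=1M count=10000 oflag=direct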
"rados bench write", you mean? Or something else? Have you checkd the disk performance of each OSD outside of Ceph? In moving from one to two OSDs your performance isn't actually going to go up because you're replicating all the data. It ought to stay flat rather than dropping, but my
guess is your second disk is slow.
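For anyone wanting to run those checks, the raw disk and the cluster can be benchmarked separately with something like the following (the OSD mount point and pool name are assumptions, adjust to your layout):

  # write speed of the disk behind osd.0, measured outside Ceph
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1000 oflag=direct
  rm /var/lib/ceph/osd/ceph-0/ddtest

  # 30-second write benchmark against pool "data" at the RADOS level
  rados bench -p data 30 write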