On 10/2/2013 3:13 PM, Warren Wang wrote:
> I agree with Greg that this isn't a great test. You'll need multiple
> clients to push the Ceph cluster, and you have to use oflag=direct if
> you're using dd.
I was not testing overall performance but, rather, running a "smoke
test" to see whether Ceph's performance was an order of magnitude
different from iSCSI and Amazon EBS. oflag=direct takes the block cache
out of the picture, but it was irrelevant for a volume of the size I tested.
By contrast, that same dd to an iSCSI volume exported by one of the
servers wrote at 240 megabytes per second. Order of magnitude difference.
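For reference, the smoke test was just a single sequential dd with the
cache bypassed; something along these lines, where the device path and
sizes are only illustrative:

    dd if=/dev/zero of=/dev/rbd/rbd/testvol bs=1M count=4096 oflag=direct

The same command pointed at the iSCSI volume is what produced the
240 MB/s figure.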
> The OSDs should be individual drives, not part of a RAID set,
> otherwise you're just creating extra work, unless you've reduced the
> number of copies to 1 in your ceph config.
If all I wanted was one copy, I would use iSCSI, which is much faster. My
purpose was to distribute copies to multiple servers / SAS channels. As
for individual drives, these OSDs are on shared infrastructure that
provides NFS and iSCSI services to the rest of my network, so that is
not going to happen.
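For what it's worth, dropping to a single copy the way Warren describes
would just be a pool setting; something like this, with the pool name as
a placeholder:

    ceph osd pool set rbd size 1

or, for new pools, in ceph.conf:

    [global]
    osd pool default size = 1

But a single copy defeats the whole point of running Ceph here, so it's
not an option for me.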
> What I've seen is that a single threaded Ceph client maxes out around
> 50 MB/s for us, but the overall capacity is much, much higher.
I suppose I was looking for performance similar to Amazon EBS. Oh well.
It's looking suspiciously like Ceph simply isn't a viable option for my
environment. I can't put out something that performs worse than the
current setup; my users would lynch me. iSCSI is a PITA because if
one of the servers goes down I have to hijack its IP on its replica
server and export its replica volumes to the clients, but at least I can
get more than 50 MB/s out of iSCSI. Oh well, it was a good idea anyhow...
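If anyone wants to double-check the aggregate-vs-single-stream behaviour
Warren describes, something like rados bench with several concurrent
operations should show it; the pool name, duration, and thread count here
are just examples:

    rados bench -p rbd 60 write -t 16

That measures the cluster as a whole, though, and my problem is the
single client stream, so it doesn't change the conclusion.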