On 01/09/2014 10:43 AM, Bradley Kite wrote:
On 9 January 2014 15:44, Christian Kauhaus <kc@xxxxxxxxxx> wrote:
On 09.01.2014 10:25, Bradley Kite wrote:
> 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
> plus a 160GB SSD.
> [...]
> By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4000 read IOPS and
> ~2000 write IOPS (but with 15K RPM SAS disks).
I think that comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm
SAS disks is not fair. The random access times of 15k SAS disks are hugely
better than those of 7.2k SATA disks. What would be far more interesting is
to compare Ceph against iSCSI on identical disks.
Regards
Christian
--
Dipl.-Inf. Christian Kauhaus <>< · kc@xxxxxxxxxx · systems administration
gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany
http://gocept.com · tel +49 345 219401-11
Python, Pyramid, Plone, Zope · consulting, development, hosting, operations
Hi Christian,
Yes, for a true comparison it would be better, but this is the only iSCSI
SAN we have available for testing, so I really only compared against it to
get a "gut feel" for relative performance.
I'm still looking for clues that might indicate why there is such a huge
difference between the read and write rates on the Ceph cluster, though.
One thing you may want to look at is some comparisons we did with fio on
different RBD volumes with varying io depths and volume/guest counts:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/
You'll probably be most interested in the 4k random read/write results
for XFS. It would be interesting to see if you saw any difference with
more or fewer volumes at different io depths. Also, sorry if I missed
it, but is this QEMU/KVM? If so, did you enable RBD cache?
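If you want to run that kind of sweep yourself, something along these lines
inside a guest should do it (the mount point, file size and runtime are just
placeholders for whatever fits your setup):

    # 4k random reads at several queue depths against a file on the
    # RBD-backed filesystem; repeat with --rw=randwrite for the write side
    for depth in 1 4 16 32 64; do
        fio --name=randread-qd$depth --filename=/mnt/rbdtest/fio.dat \
            --size=4G --ioengine=libaio --direct=1 --bs=4k \
            --rw=randread --iodepth=$depth --runtime=60 --time_based \
            --group_reporting
    done

For RBD cache with QEMU/KVM, the usual combination is the client-side
ceph.conf options plus writeback caching on the guest disk, roughly:

    # ceph.conf on the hypervisor
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true

    # libvirt disk driver element for the guest
    <driver name='qemu' type='raw' cache='writeback'/>

(The cache sizing options are tunable on top of that; the above is just the
minimal way to switch it on.)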
I've been doing some more testing, and the raw random read/write
performance of the individual bcache OSDs is around 1500 IOPS, so I feel I
should be getting significantly more out of Ceph than I currently am.
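(For reference, that raw number is the sort of thing you can sanity-check
straight against the cache device with a one-liner like the one below.
/dev/bcache0 is only a placeholder for whatever device your OSD sits on,
and the write portion will clobber whatever is on it, so only run this on a
scratch device:)

    # 70/30 4k random read/write mix directly against the block device
    fio --name=raw-randrw --filename=/dev/bcache0 --ioengine=libaio \
        --direct=1 --bs=4k --rw=randrw --rwmixread=70 --iodepth=32 \
        --runtime=60 --time_based --group_reporting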
Of course, as soon as bcache stops providing benefits (i.e. data is pushed
out of the SSD cache), the raw performance drops to that of a standard SATA
drive, around 120 IOPS.
Regards
--
Brad.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com