Re: Ceph Performance

I have done some similar testing before.
Here are a few things to keep in mind.

1) Ceph writes to the journal first and then to the filestore. If you put bcache in front of the entire OSD, each single Ceph write turns into 4 I/O write operations per OSD (see the sketch below): one to the journal's cache, a second to the journal, a third to the filestore's cache, and a fourth to the filestore.

2) Writing the journal to a file instead of to a raw block device also adds some overhead.

I would try write-around/read-only mode on bcache and create a raw partition on the SSD to use for the journal. I have used a short-stroked partition on HDDs for the journal before, with good results.
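To make the write-amplification point concrete, here is a minimal back-of-envelope sketch in Python (not Ceph code; it just counts the device-level writes in the two layouts described above):

def writes_per_client_write(journal_on_bcache, filestore_on_bcache):
    """Device-level writes generated by one client write landing on one OSD."""
    writes = 0
    # Ceph writes the data to the journal first...
    writes += 2 if journal_on_bcache else 1    # journal write + its bcache cache copy
    # ...and then to the filestore.
    writes += 2 if filestore_on_bcache else 1  # filestore write + its bcache cache copy
    return writes

# bcache in front of the entire OSD (journal is a file on the cached device):
print(writes_per_client_write(True, True))    # -> 4

# Journal on a raw SSD partition, bcache in write-around mode
# (writes bypass the cache; it is only populated on reads):
print(writes_per_client_write(False, False))  # -> 2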



On Thu, Jan 9, 2014 at 11:43 AM, Bradley Kite <bradley.kite@xxxxxxxxx> wrote:
On 9 January 2014 15:44, Christian Kauhaus <kc@xxxxxxxxxx> wrote:
On 09.01.2014 10:25, Bradley Kite wrote:
> 3 servers (quad-core CPU, 16GB RAM), each with 4 SATA 7.2K RPM disks (4TB)
> plus a 160GB SSD.
> [...]
> By comparison, a 12-disk RAID5 iSCSI SAN is doing ~4000 read IOPS and ~2000
> write IOPS (but with 15K RPM SAS disks).

I think that comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm
SAS disks is not fair. The random access times of 15k SAS disks are hugely
better than those of 7.2k SATA disks. It would be far more interesting to
compare Ceph against iSCSI on identical disks.

Regards

Christian

--
Dipl.-Inf. Christian Kauhaus <>< · kc@xxxxxxxxxx · systems administration
gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany
http://gocept.com · tel +49 345 219401-11
Python, Pyramid, Plone, Zope · consulting, development, hosting, operations

Hi Christian,

Yes, for a true comparison it would be better, but this is the only iSCSI SAN we have available for testing, so I really only compared against it to get a "gut feel" for relative performance.

I'm still looking for clues as to why there is such a huge difference between the read and write rates on the Ceph cluster, though.

I've been doing some more testing, and the raw random read/write performance of the individual bcache-backed OSDs is around 1500 IOPS, so I feel I should be getting significantly more out of Ceph than I currently am.

Of course, as soon as bcache stops providing a benefit (i.e. the data is pushed out of the SSD cache), the raw performance drops to that of a standard SATA drive, around 120 IOPS.
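As a rough sanity check only (an assumption, not a measurement), applying the 4-writes-per-client-write figure discussed at the top of this thread to those raw per-OSD numbers:

# Back-of-envelope: divide the raw per-device random write IOPS by an assumed
# number of device writes per client write (4 with bcache in front of the
# whole OSD; replication across OSDs would reduce the result further).
def est_client_write_iops(raw_device_iops, device_writes_per_client_write=4):
    return raw_device_iops / device_writes_per_client_write

print(est_client_write_iops(1500))  # SSD cache hits: roughly 375 per OSD
print(est_client_write_iops(120))   # after eviction to plain SATA: roughly 30 per OSD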

Regards
--
Brad.





--
Jason Villalta
Co-founder
800.799.4407x1230 | www.RubixTechnology.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
