yep - that's right. 3 OSD daemons per node.
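
For anyone following along, a per-node layout like that could be sketched in
ceph.conf roughly as below. This is only a sketch - the hostname (node1), the
data paths and the journal devices (/dev/sdd1 and friends) are placeholder
names, not the actual hardware used in these tests:

    [osd.0]
        host = node1
        # data filesystem mounted on its own disk
        osd data = /var/lib/ceph/osd/ceph-0
        # journal on a partition of a separate disk
        osd journal = /dev/sdd1

    [osd.1]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-1
        osd journal = /dev/sde1

    [osd.2]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-2
        osd journal = /dev/sdf1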
On Thu, Jul 11, 2013 at 9:16 AM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
Were you running 3 OSDs per node (an OSD per data/journal drive pair)?

On Wed, Jul 10, 2013 at 6:23 PM, ker can <kercan74@xxxxxxxxx> wrote:
>
> Now separating out the journal from the data disk ...
>
> HDFS write numbers (3 disks/data node)
> Average execution time: 466
> Best execution time: 426
> Worst execution time: 508
>
> Ceph write numbers (3 data disks/data node + 3 journal disks/data node)
> Average execution time: 610
> Best execution time: 593
> Worst execution time: 635
>
> So Ceph was about 1.3x slower than HDFS in the average case (610/466 ≈ 1.31)
> when journal and data are separated ... a 70% improvement over the case where
> journal and data share the same disk, but still a bit off from the HDFS
> performance.