Re: Hadoop/Ceph and DFS IO tests

Yep, that's right: 3 OSD daemons per node.
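
For reference, three OSDs per node with journals split onto separate disks would look roughly like this in ceph.conf. This is just a sketch: the hostname, data mount points, and journal partitions below are placeholders, not the actual cluster's devices.

    [osd.0]
        host = node1
        # data disk 1, mounted at the standard OSD data path (placeholder)
        osd data = /var/lib/ceph/osd/ceph-0
        # journal on a partition of a separate spindle (placeholder)
        osd journal = /dev/sdd1

    [osd.1]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-1
        osd journal = /dev/sde1

    [osd.2]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-2
        osd journal = /dev/sdf1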


On Thu, Jul 11, 2013 at 9:16 AM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
On Wed, Jul 10, 2013 at 6:23 PM, ker can <kercan74@xxxxxxxxx> wrote:
>
> Now separating the journal from the data disk ...
>
> HDFS write numbers (3 disks/data node)
> Average execution time: 466
> Best execution time:    426
> Worst execution time:   508
>
> Ceph write numbers (3 data disks/data node + 3 journal disks/data node)
> Average execution time: 610
> Best execution time:    593
> Worst execution time:   635
>
> So Ceph was about 1.3x slower than HDFS in the average case when journal and
> data are separated ... a 70% improvement over the case where journal and data
> share the same disk, but still a bit short of HDFS performance.

Were you running 3 OSDs per node (an OSD per data/journal drive pair)?
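
(The subject line suggests these are TestDFSIO runs. A representative write invocation under the Hadoop 1.x layout is below; the -nrFiles and -fileSize values are illustrative, since the thread doesn't state the actual parameters used:

    hadoop jar $HADOOP_HOME/hadoop-test-*.jar TestDFSIO \
        -write -nrFiles 64 -fileSize 1024

TestDFSIO reports test execution time in seconds, which is presumably the unit of the numbers quoted above.)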

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
