Re: Hadoop on ceph

Hi Andrey,

On Fri, 16 Mar 2012, Andrey Stepachev wrote:

> Ceph has several problems right now:
> a) due to strict POSIX semantics, Ceph can't handle high throughput;
> it is very easy to hit the max IOPS (even if we run on tmpfs with the journal)

If I had to guess, I wouldn't initially suspect the POSIX/metadata handling, 
as that generally doesn't get in the way of data iops (although it's 
possible).  I take it TestDFSIO is a standard Hadoop benchmark?  I 
continue to be annoyed and frustrated that we don't have a Java/Hadoop 
person in-house to work on this stuff (hint hint, 
http://ceph.newdream.net/jobs).  I hope we can carve out some resources to 
work on this soon.

> b) broken timestamps.  In my case this affected the HBase HLog functionality,
> so recovery after a failure may not be possible.

I take it this is the same problem Noah was seeing?  
(http://tracker.newdream.net/issues/1666)  

Thanks!
sage


> 
> For a 6-node cluster (I use XFS, because btrfs degrades at high volumes)
> 
> Ceph 0.43 (from my github)
> ------------------------------------------------------------------
>            Date & time: Thu Mar 01 18:38:16 MSK 2012
>        Number of files: 36
> Total MBytes processed: 360000
>      Throughput mb/sec: 2.8411840839866644
> Average IO rate mb/sec: 2.8668723106384277
>  IO rate std deviation: 0.29681573660544924
>     Test exec time sec: 3991.457
> 
> 
> Original hadoop
> ------------------------------------------------------------------
> ----- TestDFSIO ----- : write
>            Date & time: Thu Mar 01 14:08:53 MSK 2012
>        Number of files: 36
> Total MBytes processed: 360000
>      Throughput mb/sec: 4.59152733368745
> Average IO rate mb/sec: 4.596996784210205
>  IO rate std deviation: 0.16118933245583172
>     Test exec time sec: 2313.589
> 
> 2012/3/15 Matt Weil <mweil@xxxxxxxxxxxxxxxx>:
> >> http://ceph.newdream.net/wiki/Using_Hadoop_with_Ceph
> >
> >
> > What are the plans going forward.
> >
> > Are there any benchmarks out there between HDFS and ceph?
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> > the body of a message to majordomo@xxxxxxxxxxxxxxx
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 
> -- 
> Andrey.
> 
> 
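As a quick sanity check on the figures quoted above, the per-file TestDFSIO throughput numbers can be turned into rough aggregate rates. This is only a sketch: it assumes all 36 writer tasks ran concurrently, which TestDFSIO's per-task reporting does not guarantee.

```python
# Rough aggregate write throughput from the TestDFSIO results above,
# assuming the 36 writer tasks ran concurrently (an approximation;
# TestDFSIO reports per-task averages, not cluster-wide rates).
N_FILES = 36

ceph_mb_s = 2.8411840839866644 * N_FILES   # ~102 MB/s aggregate on Ceph 0.43
hdfs_mb_s = 4.59152733368745 * N_FILES     # ~165 MB/s aggregate on stock HDFS

slowdown = hdfs_mb_s / ceph_mb_s           # Ceph is roughly 1.6x slower here

print(round(ceph_mb_s, 1), round(hdfs_mb_s, 1), round(slowdown, 2))
```

The exec-time ratio (3991.457 s vs 2313.589 s, about 1.7x) is consistent with that per-task throughput gap.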
