Ceph has several problems right now:

a) Due to its strict POSIX semantics, Ceph can't handle high
throughput; it is very easy to hit the max IOPS (even with the
journal on tmpfs).

b) Broken timestamps. In my case this affected the HBase HLog
functionality, so it may be impossible to recover after a failure.

Results for a 6-node cluster (I use XFS, because btrfs degrades
on large volumes):

Ceph 0.43 (from my GitHub)
------------------------------------------------------------------
            Date & time: Thu Mar 01 18:38:16 MSK 2012
        Number of files: 36
 Total MBytes processed: 360000
      Throughput mb/sec: 2.8411840839866644
 Average IO rate mb/sec: 2.8668723106384277
  IO rate std deviation: 0.29681573660544924
     Test exec time sec: 3991.457

Original hadoop
------------------------------------------------------------------
----- TestDFSIO ----- : write
            Date & time: Thu Mar 01 14:08:53 MSK 2012
        Number of files: 36
 Total MBytes processed: 360000
      Throughput mb/sec: 4.59152733368745
 Average IO rate mb/sec: 4.596996784210205
  IO rate std deviation: 0.16118933245583172
     Test exec time sec: 2313.589

2012/3/15 Matt Weil <mweil@xxxxxxxxxxxxxxxx>:
>> http://ceph.newdream.net/wiki/Using_Hadoop_with_Ceph
>
> What are the plans going forward?
>
> Are there any benchmarks out there between HDFS and Ceph?

-- 
Andrey.
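P.S. For anyone who wants to reproduce these numbers: they are
consistent with a standard TestDFSIO write run along the following
lines. The exact test jar name depends on the Hadoop version, so
treat this as a sketch rather than the exact command I ran:

  # TestDFSIO write benchmark: 36 files of 10000 MB each
  # (-fileSize is in MB, so 36 x 10000 = 360000 MB total,
  #  matching "Total MBytes processed" above)
  hadoop jar hadoop-test-*.jar TestDFSIO -write -nrFiles 36 -fileSize 10000

The same invocation with -read gives the read-side numbers, and
-clean removes the test data afterwards.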