Hi Pranith,

The Bigtop smoke tests are a good way to go; you can run them against Pig, Hive, and so on. In general, running a simple MapReduce job like wordcount is a good first pass. Other communities, such as OrangeFS, also run Hadoop tests on alternative file systems, so you could collaborate with them. There is an HCFS wiki page on hadoop.apache.org that you can contribute to, where we detail Hadoop interoperability.
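As a sketch of that first-pass wordcount check (paths and jar names are assumptions; they vary by Hadoop version and distribution, and this presumes a cluster already configured to use the alternative file system):

```shell
# Put a small sample file into the cluster's default file system,
# run the stock wordcount example, and read back the result.
hadoop fs -mkdir -p /tmp/wc-in
echo "hello gluster hello hadoop" | hadoop fs -put - /tmp/wc-in/sample.txt
# Jar location is typical for Hadoop 2.x/3.x layouts; adjust as needed.
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    wordcount /tmp/wc-in /tmp/wc-out
hadoop fs -cat /tmp/wc-out/part-r-00000
```

If the counts come back correctly, the basic read/write and shuffle paths through the file system are working, which is a reasonable baseline before moving on to the Bigtop suites.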
_______________________________________________ Gluster-devel mailing list Gluster-devel@xxxxxxxxxxx http://www.gluster.org/mailman/listinfo/gluster-devel