Hi Vijay,

There's "something" wrong with the release-3.6 branch. ;)

Several of the Rackspace regression VMs have somehow been killed over the last few hours and needed to be rebuilt. I'm not sure exactly what's being done to them: they stop responding to ssh, and the best I can do is get them into rescue mode... where nothing obviously shows up as being wrong. (But they're still useless when booted normally.)

Anyway, I suspect there's something in the release-3.6 branch that's causing it.

Here's a regression run of the release-3.6 branch, with no other CRs applied:

  http://build.gluster.org/job/rackspace-regression-2GB/638/console

Note this warning in the compilation phase:

  /home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c: In function ‘dht_lookup_everywhere_done’:
  /home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c:1229: warning: implicit declaration of function ‘dht_fill_dict_to_avoid_unlink_of_migrating_file’

Guessing this is significant. :)

Also notice that the regression testing phase proper is broken pretty much from the start. So, I think something is fundamentally busted in release-3.6 atm.

For comparison, here's the master branch, running on the same VM just a bit before:

  http://build.gluster.org/job/rackspace-regression-2GB/636/console

It compiles fine, and the regression testing seems ok. (I aborted it early only due to impatience :>)

Anyway, this is 100% reproducible. I rebooted that VM between runs and cleaned out the workspace and other artifacts, so it shouldn't be due to anything silly like that.

Any ideas? Hopefully it's something simple to fix. :)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel