On 12/06/2014, at 6:58 AM, Niels de Vos wrote:
<snip>
> If you capture a vmcore (needs kdump installed and configured), we may
> be able to see the cause more clearly.

That does help, and probably Harsha's suggestion will too. :)

I'll look into it properly later on today.  For the moment, I've
rebooted the other slaves, which seems to put them into an ok state
for a few runs.

I've also just started some rackspace-regression runs on them, using
the jobs queued up in the normal regression queue.  The results are
being updated live in Gerrit now (+1/-1/MERGE CONFLICT).

So, if you see any regression runs pass on those slaves, it's worth
removing the corresponding job from the main regression queue.  That'll
help keep the queue shorter for today at least. :)

Btw - Happy vacation Niels :)

/me goes to bed

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel