On 03/31/2015 08:33 AM, Justin Clift wrote:
> Hi all,
>
> Ran 20 x regression test jobs on (severely resource constrained) 1GB
> Rackspace VMs last night (in addition to the 20 x normal-VM runs also
> done). The 1GB VMs have much, much slower disk, only one virtual CPU,
> and half the RAM of our "standard" 2GB testing VMs.
>
> These are the failure results:
>
>  * 20 x tests/basic/mount-nfs-auth.t
>      Failed test: 40
>      100% fail rate ;)
>  * 20 x tests/basic/uss.t
>      Failed tests: 149, 151-153, 157-159
>      100% fail rate
>  * 11 x tests/bugs/distribute/bug-1117851.t
>      Failed test: 15
>      55% fail rate
>  * 2 x tests/performance/open-behind.t
>      Failed test: 17
>      10% fail rate
>  * 1 x tests/basic/afr/self-heald.t
>      Failed tests: 13-14, 16, 19-29, 32-50, 52-65, 67-75, 77, 79-81
>      5% fail rate
>  * 1 x tests/basic/afr/entry-self-heal.t
>      Failed tests: 127-128
>      5% fail rate
>  * 1 x tests/features/trash.t
>      Failed test: 57
>      5% fail rate
>
> Wouldn't surprise me if some/many of the failures are due to timeouts
> of various sorts in the tests. Very slow VMs. ;)
>
> Also, most of the regression runs produced cores. Here are the first two:
>
> http://ded.ninja/gluster/blk0/
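The percentages in the quoted list are just failing runs out of the 20 total runs. A minimal sketch of that arithmetic, using the counts from the list (test names abbreviated for brevity):

```shell
# Fail-rate arithmetic for the quoted results: <failing runs> / 20 runs.
# The name:count pairs below are taken directly from the list above.
total=20
for entry in mount-nfs-auth.t:20 uss.t:20 bug-1117851.t:11 \
             open-behind.t:2 self-heald.t:1 entry-self-heal.t:1 trash.t:1; do
    name=${entry%:*}
    count=${entry#*:}
    echo "$name: $(( 100 * count / total ))% fail rate"
done
```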
There are 4 cores here, 3 of them pointing to the (by now hopefully) famous bug #1195415. One of the cores exhibits a different stack, etc. More analysis is needed to see what the issue could be there. Core file: core.16937
> http://ded.ninja/gluster/blk1/
There is a single core here, pointing to the above bug again.
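A quick way to check each core against that known signature is to pull a backtrace in batch mode with gdb. A minimal sketch, assuming the cores were dumped by glusterfsd at /usr/sbin/glusterfsd (an assumption; match the binary that actually produced each core, which `file core.16937` will report):

```shell
# For every core file in the current directory, write a full backtrace
# of all threads alongside it as <core>.bt.
# /usr/sbin/glusterfsd is an assumed binary path -- adjust to whichever
# executable actually dumped the core.
for core in core.*; do
    [ -e "$core" ] || continue   # skip cleanly when no cores match the glob
    gdb -batch -ex "thread apply all bt full" /usr/sbin/glusterfsd "$core" \
        > "$core.bt" 2>&1
done
```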
> Hoping someone has some time to check those quickly and see if there's
> anything useful in them or not. (The hosts are all still online atm,
> shortly to be nuked.)
>
> Regards and best wishes,
>
> Justin Clift
>
> --
> GlusterFS - http://www.gluster.org
> An open source, distributed file system scaling to several petabytes,
> and handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel