> I am a bit disturbed by the fact that people raise the
> "NetBSD regression ruins my life" issue without doing the work of
> listing the actual issues encountered.

That's because it's not a simple list of persistent issues. As with spurious regression-test failures on Linux, it's an ever-changing set of failures that come and go. I have a script which parses the results on build.gluster.org to show which tests are failing that day:

http://review.gluster.org/#/c/12510/

With trivial modification, it can show the current failures on NetBSD instead of Linux (a rough sketch of that kind of parsing is at the end of this mail). Here's the list right now:

[08:45:57] ./tests/basic/afr/arbiter-statfs.t ..
[08:43:03] ./tests/basic/afr/arbiter-statfs.t ..
[08:40:06] ./tests/basic/afr/arbiter-statfs.t ..
[08:08:51] ./tests/basic/afr/arbiter-statfs.t ..
[08:06:44] ./tests/basic/afr/arbiter-statfs.t ..
[08:00:54] ./tests/basic/afr/self-heal.t ..
[07:59:56] ./tests/basic/afr/entry-self-heal.t ..
[18:05:23] ./tests/basic/quota-anon-fd-nfs.t ..
[18:06:37] ./tests/basic/quota-nfs.t ..
[18:49:32] ./tests/basic/quota-anon-fd-nfs.t ..
[18:51:46] ./tests/basic/quota-nfs.t ..
[14:25:37] ./tests/basic/quota-anon-fd-nfs.t ..
[14:26:44] ./tests/basic/quota-nfs.t ..
[14:45:13] ./tests/basic/tier/record-metadata-heat.t ..

So some of us *have* done that work, in a repeatable way. Note that the list doesn't include tests which *hang* instead of failing cleanly, which has recently been causing the entire NetBSD queue to get stuck until someone manually stops those jobs.

What I find disturbing is the idea that a feature with no consistently-available owner or identifiable users can be allowed to slow or block every release unless every developer devotes extra time to its maintenance. Even if NetBSD itself is worth it, I think that's an unhealthy precedent to set for the project as a whole.
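
For the curious, here is a minimal sketch (in Python) of the kind of parsing involved. It is not the script from the review linked above; it assumes, purely for illustration, that you have already saved the console logs locally, one file per run, and that a failed test shows up on a line containing its ./tests/...*.t path together with a marker such as "Failed" or "not ok". The real logs on build.gluster.org may well use a different format, so treat this only as an outline.

#!/usr/bin/env python3
# Sketch only -- not the script behind the review linked above.
# Assumes each log file is a saved regression console log, and that a
# failed test appears on a line containing both the ./tests/... path of
# the .t file and a failure marker ("Failed" or "not ok"). Adjust the
# marker and regex to whatever the real logs actually contain.

import re
import sys

# Matches a ./tests/.../something.t path anywhere on a line.
TEST_RE = re.compile(r'(\./tests/\S+\.t)')

def failed_tests(log_path):
    """Return the set of test paths seen on failure lines in one log."""
    failures = set()
    # errors='replace' keeps stray non-UTF-8 bytes in console logs from
    # aborting the parse.
    with open(log_path, errors='replace') as log:
        for line in log:
            if 'Failed' not in line and 'not ok' not in line:
                continue
            match = TEST_RE.search(line)
            if match:
                failures.add(match.group(1))
    return failures

if __name__ == '__main__':
    # Usage: ./list-failures.py console-log-1.txt console-log-2.txt ...
    for path in sys.argv[1:]:
        for test in sorted(failed_tests(path)):
            print(f'{path}: {test}')

Pointing the same parsing at the Linux logs or the NetBSD logs is just a matter of which console logs you feed it, which is the "trivial modification" mentioned above.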