On Thu, 2015-03-05 at 14:31 +0100, Christoph Hellwig wrote:
> For about 8 months I've merged almost every scsi commit through the
> scsi-queue staging tree, and it seems to have worked out well enough.
>
> I've been too busy for the next cycle, so 4.1 will probably have to live
> without it. I'd like to get feedback on how the tree worked for contributors
> and driver maintainers, and brainstorm how to move forward with it, preferably
> some form of real team maintenance that avoids single points of failure.

I'd like to thank Christoph for doing this; it's been an enormous help.

Here's what we'll do for 4.1: I need all the current maintainers to collect
the patches and reviews in their area and send them to the list as a series.
We'll be adhering to the guidelines Christoph laid down for inclusion:

 - the patch needs at least two positive reviews (non-author signoff,
   Reviewed-by or Acked-by tags). In practice this means it had at least
   one and I added another. As an exception, I'll also take trivial and
   important fixes if they only have a Tested-by: instead of a second review.
 - the patch has no negative review on the mailing list
 - the patch applies cleanly
 - the patch compiles (drivers for architectures I can't test excluded)
 - for the core branch: the patch survives a full xfstests run

For the last requirement, the 0-day kernel test project will be checking
this, which means a negative report from the 0-day project on a patch will
be grounds for removal.

I'll try to curate the patches in areas without maintainers (like the core).
Remember, in all cases you get an email from my automation infrastructure
when a patch is added to (or removed from) any of the SCSI trees, so if you
haven't seen the email, the patch isn't in the tree.
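(Editorial aside: the "applies cleanly" gate above is a purely mechanical check. A minimal sketch of how such a check can be done with stock git, demonstrated against a throwaway repository rather than the real SCSI tree; the file name, commit messages, and patch here are invented for illustration:)

```shell
#!/bin/sh
# Sketch: verify that a mailbox-format patch applies cleanly to the
# current branch, the way a maintainer's gate might. Everything below
# runs in a temporary demo repository.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "base" > driver.c
git add driver.c
git commit -qm "initial commit"
# Simulate a contributor's change, captured as a patch in the mailbox
# format it would have when posted to the list.
echo "fix" >> driver.c
git commit -aqm "demo: fix driver.c"
git format-patch -1 --stdout > candidate.patch
# Rewind so the branch no longer contains the change, then run the gate.
git reset --hard -q HEAD~1
if git apply --check candidate.patch 2>/dev/null; then
    result="applies cleanly"
else
    result="does not apply"
fi
echo "$result"
```

A patch that failed `git apply --check` here would, under the guidelines above, simply not go into the tree until rebased.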
You can also see the state of the git trees here:

http://git.kernel.org/cgit/linux/kernel/git/jejb/scsi.git/

with the misc branch being for 4.1 and the fixes branch being for 4.0-rc.

James