On Tue, Jun 02, 2015 at 12:59:14PM +0200, Tomas Vondra wrote:

> I disagree. The fact that we have 1 release per year means there's one
> deadline, and if you miss it you have to wait another year for the feature
> to be available in official release. That's a lot of pressure and
> frustration for developers. With more frequent releases, this issue gets
> less serious. Of course, it's not a silver bullet (e.g. does not change
> review capacity).

But it's the second part of this that is the main issue. The people who
are driving features in Postgres now are overwhelmingly the most advanced
users, who also want rock-solid database reliability. After all, the
simple use cases (the ones that basically treat the DBMS as an expensive
version of a flat filesystem) have been handled quite well in Postgres for
many releases. These are the cases people used to compare with MySQL, and
MySQL isn't any better at them than Postgres any more. But Postgres isn't
really any better at them than MySQL, either, because the basic
development model along those lines is unsophisticated and is
automatically constrained by round-tripping between the application and
the database. Anyone who wants to scale for real understands that and has
already figured out the abstractions they need.

But those are also the people with real data at stake, which is why they
picked Postgres rather than some eventually-consistent,
mostly-doesn't-lose-data distributed NoSQL system. The traditional
Postgres promise that it never loses your data is important to all those
people too. Yet they're pressing for hot new features, because it's the
nifty database tricks you can do that allow you to continue to build
ever-larger database systems.

If the model switched to more frequent "feature releases" with less
frequent "LTS" releases for stability, one of two things would happen:

1. There'd be pressure to get certain high-value features into the LTS
   releases. This is in effect the exact issue there is now.
2. The people who really need high quality and advanced features would
   all track the latest release anyway, because their risk tolerance is
   actually higher than they think (or, more likely, they're doing the
   risk calculations wrong). The effect of this would be to put pressure
   on the intermediate releases for higher quality, which would result in
   neglect of the quality of the LTS anyway.

And on top of the above, you'd split the developer community between
those working on the LTS and those not. Given that the basic problem is
"not enough developers to get the quality quite right against the desired
features", I don't really see how it helps.

As nearly as I can tell (noting that I'm watching almost entirely from
the sidelines), what really happened in the case that has everyone
worried is that one highly esteemed developer claimed something and maybe
should have relinquished it sooner, given his workload. That happens;
nobody's perfect. It's frustrating, but this is not the only community to
have had that issue (cf. the Linux kernel, for an approximately infinite
series of examples). I am not sure that the answer to this is a rejigging
of the basic development model. Hard cases make bad law.

Best regards,

A

-- 
Andrew Sullivan
ajs@xxxxxxxxxxxxxxx

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general