We have generally been running the latest non-LTS 'stable' release, since our cluster is slightly less mission-critical than most, and both Infernalis and Kraken added features that were important to us. But I really only care about RGW. If the rgw component could be split out of ceph into a plugin and updated independently, it'd be awesome for us.
A minor bugfix to radosgw shouldn't be blocked by issues with RBD, for example, since I don't care about RBD at all.
We could have packages like:
ceph-core
ceph-radosgw
ceph-rbd
ceph-mgr
...
This might increase the testing workload, but automation should take care of that.
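Purely as a sketch of the idea (hypothetical package names and version rules, not how Ceph packaging actually works today): each component could carry its own version plus a declared range of compatible ceph-core versions, so a radosgw point release ships without waiting on anything else:

# Hypothetical sketch only: these names and version rules are
# illustrative, not how Ceph packaging actually works.
from dataclasses import dataclass

@dataclass
class Component:
    name: str        # e.g. "ceph-radosgw"
    version: str     # the component's own release, e.g. "12.2.1"
    core_min: int    # oldest ceph-core major version it supports
    core_max: int    # newest ceph-core major version it supports

def can_install(component: Component, core_major: int) -> bool:
    # A component point release installs cleanly as long as the
    # installed ceph-core major version is within its declared range.
    return component.core_min <= core_major <= component.core_max

# A radosgw bugfix built against core majors 11-12 ships on a
# version-12 core without waiting on RBD or mgr:
rgw_fix = Component("ceph-radosgw", "12.2.1", core_min=11, core_max=12)
print(can_install(rgw_fix, 12))  # True

The point being that the compatibility contract would live at the core/client boundary rather than at the monolithic release.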
ceph-mgr is similar. Minor (or even major) updates to the GUI dashboard shouldn't be held back from users because we're waiting on a new RBD feature or a critical RGW fix.
radosgw and mgr are really 'clients', after all.
-Ben
On Mon, Sep 11, 2017 at 3:30 PM, John Spray <jspray@xxxxxxxxxx> wrote:
On Wed, Sep 6, 2017 at 4:23 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> Hi everyone,
>
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
>
> With kraken and luminous we missed our schedule by a lot: instead of
> releasing in October and April we released in January and August.
>
> A few observations:
>
> - Not a lot of people seem to run the "odd" releases (e.g., infernalis,
> kraken). This limits the value of actually making them. It also means
> that those who *do* run them are running riskier code (fewer users -> more
> bugs).
>
> - The more recent requirement that upgrading clusters must make a stop at
> each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel
> -> luminous) has been hugely helpful on the development side by reducing
> the amount of cross-version compatibility code to maintain and reducing
> the number of upgrade combinations to test.
>
> - When we try to do a time-based "train" release cadence, there always
> seems to be some "must-have" thing that delays the release a bit. This
> doesn't happen as much with the odd releases, but it definitely happens
> with the LTS releases. When the next LTS is a year away, it is hard to
> suck it up and wait that long.
>
> A couple of options:
>
> * Keep even/odd pattern, and continue being flexible with release dates
>
> + flexible
> - unpredictable
> - odd releases of dubious value
>
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
> + predictable schedule
> - some features will miss the target and be delayed a year
>
> * Drop the odd releases but change nothing else (i.e., 12-month release
> cadence)
>
> + eliminate the confusing odd releases with dubious value
>
> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
> difference between the current 6-month even/odd cadence and a 12-month cycle.
>
> + eliminate the confusing odd releases with dubious value
> + waiting for the next release isn't quite as bad
> - required upgrades every 9 months instead of every 12 months

This is my preferred option (second choice would be the next one up, i.e. same thing but annually).
Our focus should be on delivering solid stuff, but not necessarily
bending over backwards to enable people to run old stuff. Our
commitment to releases should be that there are either fixes for that
release, or a newer (better) release to upgrade to. Either way there
is a solution on offer (and any user/vendor who wants to independently
maintain other stable branches is free to do so).
John
> * Drop the odd releases, but relax the "must upgrade through every LTS" to
> allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
> nautilus). Shorten release cycle (~6-9 months).
>
> + more flexibility for users
> + downstreams have greater choice in adopting an upstream release
> - more LTS branches to maintain
> - more upgrade paths to consider
>
> Other options we should consider? Other thoughts?
>
> Thanks!
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
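
For concreteness, the "upgrade across 2 versions" option above amounts to a rule like the following sketch (illustrative Python only, with the release ordering taken from the names in this thread; not actual Ceph code):

# Illustrative only; release ordering taken from the names in this thread.
LTS_ORDER = ["hammer", "jewel", "luminous", "mimic", "nautilus"]

def upgrade_allowed(current: str, target: str, max_skip: int = 2) -> bool:
    # Allow jumping forward at most max_skip releases in one step.
    i, j = LTS_ORDER.index(current), LTS_ORDER.index(target)
    return 0 < j - i <= max_skip

# Today's "stop at every LTS" rule is max_skip=1:
assert not upgrade_allowed("hammer", "luminous", max_skip=1)
assert upgrade_allowed("hammer", "jewel", max_skip=1)

# The relaxed rule (max_skip=2):
assert upgrade_allowed("luminous", "mimic")      # one hop
assert upgrade_allowed("luminous", "nautilus")   # two hops
assert not upgrade_allowed("jewel", "nautilus")  # three hops: still blocked

The trade-off Sage notes falls out directly: raising max_skip multiplies the upgrade combinations that have to be tested and supported.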
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com