On Mon, Jun 17, 2013 at 02:10:27PM -0400, Travis Rhoden wrote:
> I'm actually planning this same upgrade on Saturday. Is the memory
> leak from Bobtail during deep-scrub known to be squashed? I've been
> seeing that a lot lately.

This is actually the reason why we're planning to upgrade, too. One of
the OSDs went nuts yesterday and ate up all the memory. Ceph exploded,
but - and this is the good news - it recovered smoothly.

> I know Bobtail->Cuttlefish is only one way, due to the mon
> re-architecting. But in general, whenever we do upgrades we usually
> have a fall-back/reversion plan in case things go wrong. Is that ever
> going to be possible with Ceph?

Just a gut feeling, but I expect this will stabilize once the mon
architecture stabilizes. Ceph is still young, though, and young means
going forward only.

> - Travis
>
> On Mon, Jun 17, 2013 at 12:27 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> > On Mon, 17 Jun 2013, Wolfgang Hennerbichler wrote:
> >> Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish.
> >> Are there any outstanding issues that I should be aware of? Anything
> >> that could break my productive setup?
> >
> > There will be another point release out in the next day or two that
> > resolves a rare sequence of errors during the upgrade that can be
> > problematic (see the 0.61.3 release notes). There are also several fixes
> > for udev/ceph-disk/ceph-deploy on rpm-based distros that will be included.
> > If you can wait a couple days I would suggest that.
> >
> > sage

-- 
http://www.wogri.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com