I'm actually planning this same upgrade on Saturday. Is the memory leak from Bobtail during deep-scrub known to be squashed? I've been seeing that a lot lately.

I know the Bobtail -> Cuttlefish upgrade is one-way only, due to the mon re-architecting. But in general, whenever we do upgrades we usually have a fall-back/reversion plan in case things go wrong. Is that ever going to be possible with Ceph?

- Travis

On Mon, Jun 17, 2013 at 12:27 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> On Mon, 17 Jun 2013, Wolfgang Hennerbichler wrote:
>> Hi, I'm planning to upgrade my Bobtail (latest) cluster to Cuttlefish.
>> Are there any outstanding issues that I should be aware of? Anything
>> that could break my production setup?
>
> There will be another point release out in the next day or two that
> resolves a rare sequence of errors during the upgrade that can be
> problematic (see the 0.61.3 release notes). There are also several fixes
> for udev/ceph-disk/ceph-deploy on rpm-based distros that will be
> included. If you can wait a couple of days, I would suggest that.
>
> sage