Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster


We did upgrade from bobtail to cuttlefish and are still seeing this issue. I posted about this on the ceph-users mailing list and missed this thread (sorry!), so I didn't know it was already being discussed.

Either way, I also have an OSD crashing after upgrading to 0.61.6. As I said on the other list, I'm more than happy to share log files etc. with you.

Thanks,

Peter

This is fixed in the cuttlefish branch as of earlier this afternoon. I've spent most of the day expanding the automated test suite to include upgrade combinations that trigger this, and *finally* figured out that this particular problem surfaces on clusters that upgraded from bobtail -> cuttlefish, but not on clusters created on cuttlefish.

If you've run into this issue, please use the cuttlefish branch build for now. We will have a release out in the next day or so that includes this
and a few other pending fixes.
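For anyone on Debian/Ubuntu packages, one way to pick up the branch build in the meantime is to point apt at the branch's gitbuilder repository. This is only a sketch: the gitbuilder URL, the "precise" codename, and the sources file name below are assumptions, so substitute the repository actually announced for your platform.

```shell
# Hypothetical example -- the gitbuilder URL and "precise" codename are
# assumptions; use the repository announced for your distro.
echo 'deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/cuttlefish precise main' \
    | sudo tee /etc/apt/sources.list.d/ceph-cuttlefish.list

sudo apt-get update
sudo apt-get install --only-upgrade ceph

# Confirm the running build before restarting any OSDs.
ceph --version
```

Switching the sources entry back to the stable repository once the point release ships will upgrade you off the branch build cleanly.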

I'm sorry we missed this one! The upgrade test matrix I've been working
on today should catch this type of issue in the future.

Thanks!
sage



