Rich,

Thank you for the information. Are you suggesting that I do a clean install of the OS partition, reinstall the Ceph packages, etc., and that Ceph should then be able to find the OSDs and all of the original data?

I did do something similar once when we had an MBR failure on a single node and it would no longer boot up; however, that was quite some time ago. As far as I remember, Ceph found all of the OSD data just fine and everything started up in a good state after the OS reinstall.
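If so, I assume that on a Jewel/filestore node the steps would look something like the following. This is just a rough sketch of what I have in mind, not a tested procedure:

    ceph osd set noout          # keep the cluster from rebalancing while the node is down
    # ...reinstall the OS and the ceph packages, then restore /etc/ceph/ceph.conf
    # and the relevant keyrings for this node...
    ceph-disk activate-all      # let ceph-disk re-detect and start the existing OSDs
    ceph osd unset noout        # once the OSDs are back up and in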
Thanks again for your help on this issue.

Shain

On 04/17/2018 06:00 AM, Richard Hesketh wrote:
On 16/04/18 18:32, Shain Miley wrote:

Hello,

We are currently running Ceph Jewel (10.2.10) on Ubuntu 14.04 in production. We have been running into a kernel panic bug off and on for a while, and I am starting to look into upgrading as a possible solution. We are currently running the 4.4.0-31-generic kernel on these servers and have run across this same issue on multiple nodes over the course of the last 12 months.

My 3 questions are as follows:

1) Are there any known issues upgrading a Jewel cluster to 16.04?
2) At this point, is Luminous stable enough to consider upgrading to as well?
3) If I did decide to upgrade both Ubuntu and Ceph, in which order should the upgrades occur?

Thanks in advance for any insight that you are able to provide me on this issue!

Shain

When Luminous came out I did a multipart upgrade where I took my cluster from 14.04 to 16.04 (HWE kernel line), updated from jewel to luminous, and migrated OSDs from filestore to bluestore (in that order). I had absolutely no issues throughout the process. The only thing I would suggest is that you reinstall your nodes to 16.04 rather than release-upgrading - previous experience with trying to release-upgrade on other hosts was sometimes painful; rebuilding was easier.

Rich

--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649 |
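In practice, the order Rich describes would look roughly like this on each node. This is only a sketch: the kernel package name assumes stock Ubuntu 16.04, the Ceph commands assume the standard Luminous tooling, and $ID and /dev/sdX below are placeholders for a real OSD id and data device.

    # 1) Rebuild the node on 16.04 and move to the HWE kernel line
    apt-get install --install-recommends linux-generic-hwe-16.04

    # 2) Upgrade to Luminous: monitors first, then OSD hosts one at a time
    ceph osd set noout
    apt-get install ceph                      # with the Luminous apt repo enabled
    systemctl restart ceph-osd.target
    ceph osd unset noout
    ceph osd require-osd-release luminous     # only after every daemon runs 12.2.x

    # 3) Migrate each OSD from filestore to bluestore, one at a time
    ceph osd out $ID                          # wait for the cluster to go healthy again
    systemctl stop ceph-osd@$ID
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy
    ceph-volume lvm create --bluestore --data /dev/sdX --osd-id $ID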