Re: Ceph Jewel and Ubuntu 16.04

On Tue, Apr 17, 2018 at 11:37 AM, Richard Hesketh
<richard.hesketh@xxxxxxxxxxxx> wrote:
> Yes, that's what I did - as long as you don't affect the OSD data/partitions, they should come up again just fine once you reinstall ceph. I expect once the mandatory switch to ceph-volume finally happens this process might get a little more complicated but for jewel you're still using ceph-disk so it's just using udev rules to identify the OSDs and it will bring them up at start.

Just a quick note here that ceph-volume can take over ceph-disk OSDs
by taking over their systemd units and disabling the somewhat
problematic udev rules.

http://docs.ceph.com/docs/master/ceph-volume/simple/#ceph-volume-simple

The process is rather simple, and there is support for the various
kinds of OSDs that ceph-disk would create (dmcrypt, plain, etc.).
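Roughly, the takeover looks like the sketch below (paths and flags from
memory, so treat it as an outline and check the docs linked above; the
OSD path is just an example):

    # scan each running ceph-disk OSD and persist its metadata
    # as JSON under /etc/ceph/osd/
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0

    # enable the new systemd units and start the scanned OSDs,
    # disabling the ceph-disk udev rules in the process
    ceph-volume simple activate --all

After that, the OSDs come up at boot via their systemd units rather
than udev.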

>
> Rich
>
> On 17/04/18 15:50, Shain Miley wrote:
>> Rich,
>>
>> Thank you for the information.  Are you suggesting that I do a clean install of the OS partition, reinstall the Ceph packages, etc., and then Ceph should be able to find the OSDs and all the original data?
>>
>> I did do something similar once before, when we had an MBR failure on a single node and it would no longer boot up; however, that was quite some time ago.
>>
>> As far as I remember, Ceph found all the OSD data just fine and everything started up in a good state after the OS reinstall.
>>
>> Thanks again for your help on this issue.
>>
>> Shain
>>
>>
>> On 04/17/2018 06:00 AM, Richard Hesketh wrote:
>>> On 16/04/18 18:32, Shain Miley wrote:
>>>> Hello,
>>>>
>>>> We are currently running Ceph Jewel (10.2.10) on Ubuntu 14.04 in production.  We have been running into a kernel panic bug off and on for a while and I am starting to look into upgrading as a possible solution.  We are currently running the 4.4.0-31-generic kernel on these servers and have run across this same issue on multiple nodes over the course of the last 12 months.
>>>>
>>>> My 3 questions are as follows:
>>>>
>>>> 1) Are there any known issues upgrading a Jewel cluster to 16.04?
>>>>
>>>> 2) At this point, is Luminous stable enough to consider upgrading as well?
>>>>
>>>> 3) If I do decide to upgrade both Ubuntu and Ceph, in which order should the upgrades occur?
>>>>
>>>> Thanks in advance for any insight that you are able to provide me on this issue!
>>>>
>>>> Shain
>>> When Luminous came out I did a multipart upgrade where I took my cluster from 14.04 to 16.04 (HWE kernel line), updated from Jewel to Luminous and migrated OSDs from filestore to bluestore (in that order). I had absolutely no issues throughout the process. The only thing I would suggest is that you reinstall your nodes to 16.04 rather than release-upgrading - previous experience with trying to release-upgrade other hosts was sometimes painful; rebuilding was easier.
>>>
>>> Rich
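(For anyone following the same path: once the nodes are on 16.04, the
usual Jewel -> Luminous order is roughly the sketch below. Commands are
from memory, so double-check them against the Luminous upgrade notes
before running anything.)

    ceph osd set noout                      # avoid rebalancing while daemons restart
    # upgrade packages and restart the mons first, one node at a time
    systemctl restart ceph-mon.target
    # Luminous also needs ceph-mgr daemons deployed alongside the mons
    # then upgrade packages and restart the OSDs, one node at a time
    systemctl restart ceph-osd.target
    ceph versions                           # confirm everything reports luminous
    ceph osd require-osd-release luminous   # only after all OSDs are on luminous
    ceph osd unset noout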
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


