Re: Upgrade paths beyond octopus on Centos7

What version of docker did you go with on centos 7?  Also, if you are running docker, do you need to bother with the OS upgrade?  Wondering if quincy containers can run on docker on centos 7?  That would save lots of man hours upgrading the clusters (since the whole point of containers is to be OS agnostic).
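
A quick sanity test along those lines might be something like the following (I haven't tried it yet, and the tag is just my guess at a current quincy image on quay.io):

     docker run --rm quay.io/ceph/ceph:v17 ceph --version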

I have also edited the octopus version of cephadm to be rocky-friendly (the quincy cephadm is by default)... luckily they didn’t make it difficult.
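
For anyone doing the same, the edit was basically teaching the octopus cephadm script to treat rocky like centos/rhel.  Roughly the kind of one-liner involved (the exact string in the script will differ between releases, so grep for it first rather than trusting this sed verbatim):

     grep -n "'centos'" ./cephadm      # find where the centos/rhel distro list lives
     sed -i "s/'centos', 'rhel'/'centos', 'rhel', 'rocky'/" ./cephadm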

Thanks for the info, keep it coming!

-Brent

-----Original Message-----
From: Gary Molenkamp <molenkam@xxxxxx> 
Sent: Friday, August 12, 2022 9:51 AM
To: Brent Kennedy <bkennedy@xxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re:  Re: Upgrade paths beyond octopus on Centos7

An update on my testing.

I have a 6 node test ceph cluster deployed as 1 admin and 5 OSDs. Each node is running Centos7+podman with a cephadm deployment of Octopus. Other than scale, this mirrors my production setup.

On one of the OSD nodes I did a fresh install of RockyLinux8, being sure not to touch the OSD disks.  I did not alter the ceph config in any way before doing this.  I then installed docker and cephadm (patched to accept 'rocky' as a centos derivative).
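
In case it helps anyone reproducing this, the generic way to get docker onto a fresh Rocky 8 host is the upstream docker-ce repo; roughly the following (not necessarily the exact commands I ran, and versions are not pinned here):

     dnf -y install dnf-plugins-core
     dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
     dnf -y install docker-ce docker-ce-cli containerd.io
     systemctl enable --now docker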

Using info from the controller's "ceph auth ls" and "ceph config generate-minimal-conf", I was able to redeploy the OSDs on the freshly installed server using:

     cephadm ceph-volume lvm list
     cephadm --image <full image path> deploy --fsid <cluster-fsid> \
         --name osd.<newid> --config-json config.json --osd-fsid <osd-fsid>

running the deploy command once for each of the OSDs on the server.
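
For reference, a sketch of how the config.json fed to deploy can be put together on the admin node; as far as I can tell --config-json wants a JSON object with "config" and "keyring" keys, but double-check that against your cephadm version:

     ceph config generate-minimal-conf > minimal.conf
     ceph auth get osd.<newid> > osd.keyring
     # jq >= 1.6 for --rawfile; emits {"config": "...", "keyring": "..."}
     jq -n --rawfile conf minimal.conf --rawfile key osd.keyring \
         '{config: $conf, keyring: $key}' > config.json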

This was successful with no apparent errors.  Just to check, I also altered the mon deployment to include this updated host, and a mon container was deployed without error.  The test cluster is stable and functional (so far).
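
The mon change itself was just a placement update, roughly along these lines (hostnames here are placeholders, not my real ones):

     ceph orch apply mon --placement="mon1,mon2,rocky-host"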

Gary



On 2022-08-11 19:05, Brent Kennedy wrote:
> So I tried to wipe out podman and load docker on my centos 8 stream node to see if it could run the Octopus container on it.  It wouldn’t work :(  Now, I didn’t spend too much time on it when it failed on cgroups, so there may be a way.
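>
> If the cgroups failure was a v1-vs-v2 mismatch, a quick way to see which hierarchy the host is actually on (just a diagnostic, not a fix I've verified):
>
>      stat -fc %T /sys/fs/cgroup/          # "cgroup2fs" means v2, "tmpfs" means the legacy v1 hierarchy
>      docker info 2>/dev/null | grep -i cgroup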
>
> -Brent
>
> -----Original Message-----
> From: Gary Molenkamp <molenkam@xxxxxx>
> Sent: Monday, August 8, 2022 7:50 AM
> To: ceph-users@xxxxxxx
> Subject:  Re: Upgrade paths beyond octopus on Centos7
>
> I stumbled across this dependency when testing an upgrade of the OS on an existing Ceph storage node.  The existing OS was Centos7, with podman as the container implementation, hosting 2 OSDs.  When I did a fresh OS deployment of RockyLinux8 and attempted to recreate the containers under podman, there were errors about unsupported parameters to podman.  Sorry, I don't remember the exact errors.
>
> Unfortunately, other projects pulled me away from this for a while, but I will be testing the temporary docker solution sometime later this month.  I'm hoping to use docker as a middle step to get to a supported cephadm/podman/ceph/OS platform.
>
> Gary
>
>
> On 2022-08-07 05:01, Nico Schottelius wrote:
>> Hey Brent, Gary,
>>
>> I was wondering why ceph/cephadm depends on a specific podman
>> version. Is cephadm using some specific API versions? Or in other
>> words, is there no way to use "the wrong podman version" with any
>> cephadm version?
>>
>> Reading your mails I am doubly puzzled, as I thought that cephadm
>> would actually solve these kinds of issues in the first place, and I
>> would expect it to be especially stable on RH/Centos.
>>
>> My understanding was that cephadm basically creates containers and 
>> keeps the OS untouched, which imho should work with "any os" and "any 
>> CRI", like k8s works with docker/podman/crio.
>>
>> Best regards,
>>
>> Nico
>>
>> On 2022-08-07 07:11, Brent Kennedy wrote:
>>> Did you ever find an answer?  I have the same issue, stuck in podman
>>> compatibility purgatory with Centos 7 and octopus containers.
>>> Cephadm quincy won't even run on centos 7; it says it's not supported.
>>> The closest thing I can find to a solution is to save the bare metal
>>> ceph configuration on the node, upgrade it to centos 8 stream, then
>>> restore the configuration.  After all hosts are done, upgrade to
>>> quincy and then transition to cephadm.  Unfortunately, I already
>>> converted one cluster to cephadm on octopus.
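>>>
>>> On one node that save/restore plan would look roughly like this (paths
>>> and commands are the usual non-containerized defaults, not something
>>> I've actually run yet):
>>>
>>>      tar czf /root/ceph-conf-backup.tgz /etc/ceph /var/lib/ceph/bootstrap-osd
>>>      # ...reinstall the OS, install matching ceph packages, unpack the tarball...
>>>      ceph-volume lvm activate --all   # re-detects and starts the OSDs on the existing LVs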
>>>
>>> -Brent
>>>
>>> -----Original Message-----
>>> From: Gary Molenkamp <molenkam@xxxxxx>
>>> Sent: Tuesday, May 24, 2022 11:03 AM
>>> To: ceph-users@xxxxxxx
>>> Subject:  Upgrade paths beyond octopus on Centos7
>>>
>>> Good morning,
>>>
>>> I'm looking into viable upgrade paths for my cephadm-based octopus
>>> deployment running on Centos7.  Given the podman support matrix for
>>> cephadm, how did others successfully move to Pacific under a Rhel8
>>> based OS?  I am looking to use rocky moving forward, but the latest
>>> 8.6 ships podman 4.0, which does not seem to be supported for either
>>> Octopus (Podman < 2.2) or Pacific (Podman 2.0-3.0).
>>>
>>> I was hoping to upgrade the host OS first, before moving from Octopus
>>> to Pacific, to limit the risks, so I'm trying to find a container
>>> solution that works for the older Octopus as well as a future update
>>> to Pacific.  Perhaps I should switch back to docker-based containers
>>> until the podman compatibility issues stabilize?
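>>>
>>> For comparing notes, the two things I've been checking on each candidate
>>> host are the installed podman and whether cephadm itself is happy with
>>> it (check-host should exist in both the octopus and pacific cephadm, as
>>> far as I can tell):
>>>
>>>      podman --version
>>>      cephadm check-host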
>>>
>>> Thanks
>>> Gary

-- 
Gary Molenkamp			Science Technology Services
Systems Administrator		University of Western Ontario
molenkam@xxxxxx                 http://sts.sci.uwo.ca
(519) 661-2111 x86882		(519) 661-3566

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



