Re: Upgrade paths beyond octopus on Centos7

Hey Brent,

thanks a lot for following up on this. Would it be possible to send the
error messages that you get in both cases?

While I do have my reservations about cephadm (based on experience with
ceph-deploy, ceph-ansible and friends), I would like to drill down to
the core of the problem, as containers *should* indeed run on "any"
container runtime. If they don't, my suspicion would be that the generated
container specification uses parameters unknown to one podman version or
the other, rather than anything in the actual image.

Do you mind posting the cephadm and podman versions and the corresponding
error messages that you have received with Octopus / Quincy?
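
In case it saves a round trip, this is roughly what I would run to gather
that information (mon.<host> being a placeholder for whichever daemon
fails to start):

    cephadm version
    podman --version

    # list the daemons cephadm manages, including their container images
    cephadm ls

    # journal output of the failing daemon
    cephadm logs --name mon.<host>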

Best regards,

Nico

"Brent Kennedy" <bkennedy@xxxxxxxxxx> writes:

> All I can say is that it's been impossible this past month to upgrade past
> Octopus using cephadm on CentOS 7.  I thought that if I spun up new servers
> and started containers on those using the Octopus cephadm script, I would
> be OK, but both Rocky and CentOS 8 Stream won't run the older Octopus
> containers.  When the containers start on podman 4, they show an error
> regarding groups.  Searching the web for that error only returns posts
> saying you can ignore it, but the container/service won't start.  I thought
> upgrading to Quincy would solve this, but the Quincy containers won't run
> on CentOS 7; they throw an overlay error.  Which is how I ended up with a
> cluster that was limping along with one monitor and 132 OSDs.  Just today,
> I went back and manually installed Ceph Octopus on all the nodes (bare
> metal install), and that got me back to a working state.  Based on another
> post, it seems the best way to proceed from here is to upgrade the
> remaining CentOS 7 servers to CentOS Stream 8, or wipe and install Rocky,
> and load Octopus bare metal.  Then, once that is done, upgrade to Quincy as
> bare metal.  The final step would then be moving to containers (cephadm).
> Unfortunately, I had already adopted all the OSD containers, so hopefully I
> can swap them back to bare metal without too much hassle.
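>
> In case it helps anyone, this is roughly how I expect the swap back to bare
> metal to work for a single OSD.  A sketch only, untested; it assumes
> LVM-based OSDs and the ceph-osd package installed, and the fsid and osd id
> are placeholders:
>
>     # stop and disable the cephadm-managed container unit
>     systemctl disable --now ceph-<fsid>@osd.<id>.service
>
>     # re-activate the OSDs with the packaged ceph-volume tooling, which
>     # also enables the plain ceph-osd@<id> systemd units
>     ceph-volume lvm activate --all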
>
> This podman issue basically shows the flaw in the thinking that containers
> solve the OS issue (I ran into this with Docker and Mesosphere, so I kinda
> knew what I was in for).  As much as I appreciate the dev team here at Ceph
> and like the container methodology, the way this went down is a shame
> (unless I am missing something).  I only held back upgrading because of the
> lack of an upgrade path and then the CentOS Stream situation; we normally
> upgrade things within 6 months of release.  BTW, I tried to upgrade CentOS
> 7 to Stream 8 and it said all the ceph modules conflicted with upgrade
> components, thus I had to remove them, which is why I am starting fresh
> with each machine (it's also quicker with VM images, at least for the VMs).
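>
> For reference, removing them was essentially this (the exact package set
> varies per install; the cluster data under /var/lib/ceph is not touched by
> the package removal):
>
>     # see which ceph packages are installed
>     rpm -qa | grep -i ceph
>
>     # remove them before attempting the OS upgrade
>     yum remove ceph ceph-common ceph-osd ceph-mon ceph-mgr ceph-radosgw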
>
> The upgrade path discussion I am referring to is titled:  "Migration from
> CentOS7/Nautilus to CentOS Stream/Pacific"
>
> -Brent
>
> -----Original Message-----
> From: Marc <Marc@xxxxxxxxxxxxxxxxx>
> Sent: Sunday, August 7, 2022 5:25 AM
> To: Nico Schottelius <nico.schottelius@xxxxxxxxxxx>; ceph-users@xxxxxxx
> Subject:  Re: Upgrade paths beyond octopus on Centos7
>
>
>> Reading your mails I am doubly puzzled, as I thought that cephadm
>> would actually solve this kind of issue in the first place, and I
>> would
>
> It is being advocated like this.  My opinion is that it is primarily being
> positioned as a click-next-next install tool so that a broader audience can
> be reached.  If that is the focus, problems such as the one below are
> inevitable.
>
>> expect it to be especially stable on RH/CentOS.
>
> I thought I would give CentOS 9 Stream a chance when upgrading the office
> server, converting applications to containers so that I am less dependent
> on the OS in the future.  On the 10th day or so, some container crashed and
> took down the whole server, and then, strangely enough, none of the
> containers would start because of a little damage in one container layer
> (not shared with others) of the new container.  Unfortunately everything
> was mounted on the root fs, so I had to fsck the root fs.
>
> AFAIK podman is also a fork of the docker code, and FWIW there have been
> developers who coded their own containerizer because they thought the
> docker implementation was not stable.


--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


