> > > Is it possible to compile Nautilus for el9? Or maybe just the osd's?
> >
> > I was thinking of updating first to el9/centos9/rocky9 one node at a
> > time, and after that do the ceph upgrade(s). I think this will give me
> > the least intrusive upgrade path.
> >
> > However that requires the availability of Nautilus el9 rpms. I have
> > been trying to build them via https://github.com/ceph/ceph and via
> > rpmbuild ceph.spec. Without applying too many 'hacks', I currently
> > seem to get around 30%-50%(?) of the code built[1] (albeit with quite
> > some warnings[2]). This leads me to believe that it is most likely
> > possible to build these for el9.
> >
> > Obviously I prefer to have someone with experience do this. Is it
> > possible someone from the ceph development team can build these rpms
> > for el9? Or are there serious issues that prevent this?
>
> I would say, any such untested hacks of an EOL release are too risky

Yes I agree, that is why I would like to have input from the development
team. Currently I am able to build ceph, although building rpms from
ceph.spec fails (missing binaries).

I think this upgrade path could be nice for a lot of clusters still
sitting on el7. The current hacks are (a rough sketch of the build
invocation is at the end of this mail):

- building from a local boost
- disabling some things in cmake
- a downgraded librabbitmq 0.9.0 for el9

I also have the impression that el8 is quite close to el9, otherwise I
would have run into more issues.

> from the sysadmin perspective. My preferred approach would be to
> migrate to containerized Ceph Nautilus at least temporarily

I don't want to migrate to podman, I already have a different container
system.

> (ceph-ansible can do it), then upgrade the hosts to EL9 while still
> keeping Nautilus, then, still containerized, upgrade to a more recent
> Ceph release (but note that you can't upgrade from Nautilus to Quincy
> directly, you need Octopus or Pacific as a middle step), and then

I would like to wait a bit with the Pacific upgrade, until this
performance problem is clear/solved. I can't spend too much time on
checking whether this influences my workload badly.
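
P.S. For reference, this is roughly the build invocation behind the hacks
listed above. It is an untested sketch, not a verified recipe; the cmake
option names are how I read the ceph source tree and may not all exist
exactly like this on the nautilus branch.

# Untested sketch only.
git clone -b nautilus https://github.com/ceph/ceph.git
cd ceph
git submodule update --init --recursive   # pull in the bundled submodules
./install-deps.sh                         # build deps; several need EPEL/CRB on el9
# "local boost": let cmake build the bundled boost instead of an el9 boost
# package, and switch off the rgw AMQP endpoint so librabbitmq 0.9.0 is
# not needed.
./do_cmake.sh -DWITH_SYSTEM_BOOST=OFF -DWITH_RADOSGW_AMQP_ENDPOINT=OFF
cd build
make -j"$(nproc)"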
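
And the per-node dance I have in mind for the OS upgrade itself, again
only a rough sketch rather than a tested runbook:

ceph osd set noout                # don't rebalance while a node is down
systemctl stop ceph-osd.target    # on the node being reinstalled
# ... reinstall that host with el9 + the (to-be-built) nautilus el9 rpms,
#     leaving the OSD data disks untouched ...
ceph-volume lvm activate --all    # if the OSDs were deployed with ceph-volume
systemctl start ceph-osd.target
ceph -s                           # wait until all PGs are active+clean again
ceph osd unset noout              # then move on to the next node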