Re: User + Dev Meetup Tomorrow!

Hello everyone,

Nice talk yesterday. :-)

Regarding containers vs RPMs and orchestration, and the related discussion from yesterday, I wanted to share a few things that I couldn't bring up on the call (headset/Bluetooth stack issue) to explain why we now use cephadm and ceph orch with bare-metal clusters, even though, as someone said, cephadm was not supposed to work with (nor support) bare-metal clusters. That actually surprised me, since cephadm is all about managing containers on a host, regardless of the host's type. I think it also explains the observation that roughly half of the survey reports (IIRC) come from people using cephadm with bare-metal clusters.
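To make this concrete, here is roughly what managing a bare-metal host with cephadm and ceph orch looks like; the hostnames and IPs below are just placeholders, not our actual setup:

    # Bootstrap a new cluster on a bare-metal host (placeholder IP)
    cephadm bootstrap --mon-ip 192.168.0.10

    # Add further bare-metal hosts and let the orchestrator deploy OSDs
    ceph orch host add node2 192.168.0.11
    ceph orch apply osd --all-available-devices

    # See which daemons run where
    ceph orch host ls
    ceph orch ps

There is nothing Kubernetes-specific in any of this; the hosts are plain servers running a container runtime.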

Over the years, we've deployed and managed bare-metal clusters with ceph-deploy in Hammer, then switched to ceph-ansible (take-over-existing-cluster.yml) with Jewel (or was it Luminous?), and then moved to cephadm, cephadm-ansible and ceph orch with Pacific, all to manage the exact same bare-metal clusters. I guess this explains why some bare-metal clusters today are managed with cephadm. These are not new clusters deployed with Rook in K8s environments, but existing bare-metal clusters that continue to serve us brilliantly 10 years after installation.
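For the record, the move to cephadm was done with the documented adoption procedure for existing package-based clusters. A rough sketch (placeholder host and daemon names; the real procedure also involves distributing the cephadm SSH key, so check the adoption docs):

    # On each host, convert the legacy (RPM/ceph-ansible) mon and mgr daemons to containers
    cephadm adopt --style legacy --name mon.node1
    cephadm adopt --style legacy --name mgr.node1

    # Hand management over to the orchestrator and register the hosts
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    ceph orch host add node1 192.168.0.10

    # Then adopt the OSDs, host by host
    cephadm adopt --style legacy --name osd.0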

Regarding RPMs vs containers, as mentioned during the call, I'm not sure why one would still want to use RPMs given the simplicity and velocity that containers offer for upgrades, thanks to ceph orch's clever automation. Some reported performance reasons, meaning RPM-installed binaries would perform better than containerized ones. Is there any evidence of that?
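For reference, this is what a full containerized upgrade boils down to with the orchestrator (the version number is just an example):

    # Kick off a rolling upgrade to a given release
    ceph orch upgrade start --ceph-version 18.2.2

    # Follow progress, or pause/stop if something looks wrong
    ceph orch upgrade status
    ceph orch upgrade pause
    ceph orch upgrade stop

The orchestrator handles the daemon ordering (mgrs, mons, OSDs, etc.) itself, which is exactly the part that used to require careful playbook work with RPM-based installs.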

Perhaps the real reason people still use RPMs is that they have invested a lot of time and effort into developing automation tools/scripts/playbooks for RPM installations, and they consider the transition to ceph orch and containerized environments to be a significant challenge.

Regarding containerized Ceph, I remember asking Sage for a minimalist CephOS back in 2018 (there were no containers at that time). IIRC, he said maintaining a Ceph-specific Linux distro would take too much time and resources, so it was not something considered back then. Now that Ceph is all containers, I really hope a minimalist rolling Ceph distro comes out one day. ceph orch could even handle rare distro tasks such as kernel upgrades and ordered reboots. This would make Ceph clusters much easier to maintain over time (compared to the complicated upgrade path from non-containerized RHEL7+RHCS4.3 to containerized RHEL9+RHCS5.2 that we had to follow a year ago).
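Part of this already exists today: the orchestrator has a host maintenance mode, so an ordered, one-host-at-a-time reboot loop can at least be done by hand (hostname is a placeholder):

    # Stop the host's daemons and set the appropriate flags before rebooting
    ceph orch host maintenance enter node1

    # ... reboot the host, wait for it to come back ...

    # Bring the daemons back and let the cluster recover
    ceph orch host maintenance exit node1

What's missing is exactly what I'd hope a CephOS + ceph orch combination could do: drive the OS/kernel upgrade and the reboot ordering itself.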

Best regards,
Frédéric.

----- On May 23, 2024, at 15:58, Laura Flores lflores@xxxxxxxxxx wrote:

> Hi all,
> 
> The meeting will be starting shortly! Join us at this link:
> https://meet.jit.si/ceph-user-dev-monthly
> 
> - Laura
> 
> On Wed, May 22, 2024 at 2:55 PM Laura Flores <lflores@xxxxxxxxxx> wrote:
> 
>> Hi all,
>>
>> The User + Dev Meetup will be held tomorrow at 10:00 AM EDT. We will be
>> discussing the results of the latest survey, and users who attend will have
>> the opportunity to provide additional feedback in real time.
>>
>> See you there!
>> Laura Flores
>>
>> Meeting Details:
>> https://www.meetup.com/ceph-user-group/events/300883526/
>>
>> --
>>
>> Laura Flores
>>
>> She/Her/Hers
>>
>> Software Engineer, Ceph Storage <https://ceph.io>
>>
>> Chicago, IL
>>
>> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
>> M: +17087388804
>>
>>
>>
> 
> --
> 
> Laura Flores
> 
> She/Her/Hers
> 
> Software Engineer, Ceph Storage <https://ceph.io>
> 
> Chicago, IL
> 
> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
> M: +17087388804
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



