Re: User + Dev Meetup Tomorrow!

Hello Sebastian,

I just checked the survey and you're right, the issue was with the question. It got me a bit confused when I read it, but I clicked anyway. Who doesn't like clicking? :-D

What best describes your deployment target? *
1/ Bare metal (RPMs/Binary)
2/ Containers (cephadm/Rook)
3/ Both

How funny is that.

Apart from that, I was thinking about the users who have reported finding the orchestrator a little obscure in its operation/decisions, particularly with regard to the creation of OSDs.

A nice feature would be a history of what the orchestrator did, along with the result of each action and, in case of failure, the reason.
A 'ceph orch history' for example (or 'ceph orch status --details' or '--history' or whatever). It would be much easier to read than the MGR's very verbose ceph.cephadm.log.

Like for example:

$ ceph orch history
DATE/TIME                     TASK                                                            HOSTS                         RESULT
2024-05-24T10:40:44.866148Z   Applying tuned-profile latency-performance                      voltaire,lafontaine,rimbaud   SUCCESS
2024-05-24T10:39:44.866148Z   Applying mds.cephfs spec                                        verlaine,hugo                 SUCCESS
2024-05-24T10:33:44.866148Z   Applying service osd.osd_nodes_fifteen on host lamartine...     lamartine                     FAILED (host has _no_schedule label)
2024-05-24T10:28:44.866148Z   Applying service rgw.s31 spec                                   eluard,baudelaire             SUCCESS

We'd just have to "watch ceph orch history" and see what the orchestrator does in real time.
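
In the meantime, and if I remember the syntax correctly, the closest thing we
have today is reading the cephadm channel of the cluster log, which is far less
digestible:

$ ceph log last 50 info cephadm    # last 50 cephadm log messages
$ ceph -W cephadm                  # follow new messages in real time

That's raw log lines though, not a per-task summary with a RESULT column like
the above.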

Cheers,
Frédéric.

----- On May 24, 2024 at 15:07, Sebastian Wagner <sebastian.wagner@xxxxxxxx> wrote:

> Hi Frédéric,
> 
> I agree. Maybe we should re-frame things? Containers can run on
> bare metal and containers can run virtualized. And distribution packages
> can run on bare metal and virtualized as well.
> 
> What about asking independently about:
> 
>  * Do you run containers or distribution packages?
>  * Do you run bare-metal or virtualized?
> 
> Best,
> Sebastian
> 
> On 24.05.24 at 12:28, Frédéric Nass wrote:
>> Hello everyone,
>>
>> Nice talk yesterday. :-)
>>
>> Regarding containers vs RPMs and orchestration, and the related discussion from
>> yesterday, I wanted to share a few things (which I wasn't able to share
>> yesterday on the call due to a headset/Bluetooth stack issue) to explain why we
>> use cephadm and ceph orch these days with bare-metal clusters, even though, as
>> someone said, cephadm was not supposed to work with (nor support) bare-metal
>> clusters. That actually surprised me, since cephadm is all about managing
>> containers on a host, regardless of the host's type. I also think this explains
>> the observation that half of the reports (IIRC) are supposedly using cephadm
>> with bare-metal clusters.
>>
>> Over the years, we've deployed and managed bare-metal clusters with ceph-deploy
>> in Hammer, then switched to ceph-ansible (take-over-existing-cluster.yml) with
>> Jewel (or was it Luminous?), and then moved to cephadm, cephadm-ansible and
>> ceph orch with Pacific, to manage the exact same bare-metal cluster. I guess
>> this explains why some bare-metal clusters today are managed using cephadm.
>> These are not new clusters deployed with Rook in K8s environments, but existing
>> bare-metal clusters that continue to serve brilliantly 10 years after
>> installation.
>>
>> Regarding RPMs vs containers, as mentioned during the call, I'm not sure why
>> one would still want to use RPMs over containers, considering the simplicity
>> and velocity that containers offer for upgrades thanks to ceph orch's clever
>> automation. Some reported performance reasons, meaning RPM binaries would
>> supposedly perform better than containers. Is there any evidence of that?
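>>
>> For anyone who hasn't experienced it, a whole rolling upgrade boils down to
>> something like this (give or take the exact image tag), with the orchestrator
>> working out the daemon upgrade order by itself:
>>
>> $ ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2
>> $ ceph orch upgrade status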
>>
>> Perhaps the real reason people still use RPMs is that they have invested a lot
>> of time and effort into developing automation tools/scripts/playbooks for
>> RPM-based installations, and they consider the transition to ceph orch and
>> containerized environments a significant challenge.
>>
>> Regarding containerized Ceph, I remember asking Sage for a minimalist CephOS
>> back in 2018 (there were no containers at that time). IIRC, he said maintaining
>> a Ceph-specific Linux distro would take too much time and resources, so it was
>> not something considered at the time. Now that Ceph is all containers, I really
>> hope that a minimalist rolling Ceph distro comes out one day. ceph orch could
>> even handle rare distro upgrades, such as kernel upgrades, as well as ordered
>> reboots. This would make Ceph clusters much easier to maintain over time
>> (compared to the complicated upgrade path from non-containerized RHEL7+RHCS4.3
>> to containerized RHEL9+RHCS5.2 that we had to follow a year ago).
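>>
>> Until such a distro exists, the closest building blocks I know of are the
>> orchestrator's maintenance mode and a manual rolling reboot, along these lines
>> (reusing a hostname from my example above):
>>
>> $ ceph orch host maintenance enter lamartine
>> $ ssh lamartine reboot     # e.g. after a kernel update
>> $ ceph orch host maintenance exit lamartine
>>
>> You'd still have to handle the ordering and wait for HEALTH_OK between hosts
>> yourself, though.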
>>
>> Best,
>> Frédéric.
>>
>> ----- On May 23, 2024 at 15:58, Laura Flores <lflores@xxxxxxxxxx> wrote:
>>
>>> Hi all,
>>>
>>> The meeting will be starting shortly! Join us at this link:
>>> https://meet.jit.si/ceph-user-dev-monthly
>>>
>>> - Laura
>>>
>>> On Wed, May 22, 2024 at 2:55 PM Laura Flores <lflores@xxxxxxxxxx> wrote:
>>>
>>>> Hi all,
>>>>
>>>> The User + Dev Meetup will be held tomorrow at 10:00 AM EDT. We will be
>>>> discussing the results of the latest survey, and users who attend will have
>>>> the opportunity to provide additional feedback in real time.
>>>>
>>>> See you there!
>>>> Laura Flores
>>>>
>>>> Meeting Details:
>>>> https://www.meetup.com/ceph-user-group/events/300883526/
>>>>
>>> --
>>>
>>> Laura Flores
>>>
>>> She/Her/Hers
>>>
>>> Software Engineer, Ceph Storage <https://ceph.io>
>>>
>>> Chicago, IL
>>>
>>> lflores@xxxxxxx | lflores@xxxxxxxxxx
>>> M: +17087388804
> --
> Head of Software Development
> E-Mail: sebastian.wagner@xxxxxxxx
> 
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges, Andy Muthmann - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> 
> Web <https://croit.io/> | LinkedIn <http://linkedin.com/company/croit> |
> Youtube <https://www.youtube.com/channel/UCIJJSKVdcSLGLBtwSFx_epw> |
> Twitter <https://twitter.com/croit_io>
> 
> 
> TOP 100 Innovator Award Winner
> <https://croit.io/blog/croit-receives-top-100-seal> by compamedia
> Technology Fast50 Award
> <https://croit.io/blog/deloitte-technology-fast-50-award> Winner by Deloitte
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



