Re: Experimental upgrade of a Cephadm-managed Squid cluster to Ubuntu Noble (walk-through and RFC)

This current behaviour (of not disabling the whole ceph.target when
entering maintenance mode) is probably not correct, as the whole node is
affected, and with it any cluster(s) running on it.

I'll raise this in the next cephadm weekly and see what the team thinks.
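
For reference, a minimal sketch of how one could reproduce this on a node
hosting daemons from two clusters (hostname and fsid are the ones from the
output quoted below; exact syntax may differ between releases):

# from a host with the first cluster's admin keyring
ceph orch host maintenance enter ceph-node-2

# then, on ceph-node-2 itself: only the first cluster's target goes inactive
systemctl is-active ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
# ...while the global ceph.target (and the second cluster's target) stay active
systemctl is-active ceph.target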

On Thu, Jan 2, 2025 at 5:22 PM Florian Haas <florian.haas@xxxxxxxxxx> wrote:

> On 02/01/2025 16:37, Redouane Kachach wrote:
> > Just to comment on the ceph.target: technically, in a containerized Ceph
> > deployment a node can host daemons from *many Ceph clusters* (each with
> > its own ceph_fsid).
> >
> > The ceph.target is a global unit and it is the root for all the clusters
> > running on the node. There is another target which is specific to each
> > cluster (ceph-<fsid>.target). From my testing env, where I created two
> > clusters and forced maintenance mode for the first one only:
> >
> > [root@ceph-node-2 ~]# systemctl list-dependencies ceph.target
> > ceph.target
> > ○ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
> > ○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8@xxxxxxxxxxxxxxxxxx-node-2.service
> > ○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8@xxxxxxxxxx-node-2.service
> > × │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8@xxxxxxxx-node-2.ptlcoi.service
> > ○ │ ├─ceph-789c5638-bec0-11ef-9350-5254002ff0d8@xxxxxxxx-node-2.service
> > × │ └─ceph-789c5638-bec0-11ef-9350-5254002ff0d8@xxxxxxxxxxxxxxxxxx-node-2.service
> > ● └─ceph-a3cf42a0-becc-11ef-9470-52540012a496.target
> > ●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a496@xxxxxxxxxxxxxxxxxx-node-2.service
> > ●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a496@xxxxxxxxxx-node-2.service
> > ●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a496@xxxxxxxx-node-2.bodyuz.service
> > ●   ├─ceph-a3cf42a0-becc-11ef-9470-52540012a496@xxxxxxxx-node-2.service
> > ●   └─ceph-a3cf42a0-becc-11ef-9470-52540012a496@xxxxxxxxxxxxxxxxxx-node-2.service
> >
> > *Global target:*
> > [root@ceph-node-2 ~]# systemctl is-active ceph.target
> > active
> >
> > *First cluster:*
> > [root@ceph-node-2 ~]# systemctl is-active ceph-789c5638-bec0-11ef-9350-5254002ff0d8.target
> > inactive
> >
> > *Second cluster:*
> > [root@ceph-node-2 ~]# systemctl is-active ceph-a3cf42a0-becc-11ef-9470-52540012a496.target
> > active
> >
>
> Right, so in my view that's one more reason *not* to use maintenance
> mode in a distro upgrade, since stopping ceph.target ensures that all
> Ceph-related services are stopped on a node, even in the — somewhat
> uncommon — case of that node running services related to multiple
> clusters. Wouldn't you agree?
>
> Cheers,
> Florian
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



