My knowledge of AppArmor and Ubuntu is too limited to give a qualified
answer, so I'll leave it for others to respond.
Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
AppArmor profiles for Ceph are apparently very limited on Ubuntu.
On hvs001 (not a misbehaving host) with the services OSDs, mgr, Prometheus, mon:
- /bin/prometheus (93021) docker-default
- /usr/bin/ceph-mgr (3574755) docker-default
- /bin/alertmanager (3578797) docker-default
- /usr/bin/ceph-mds (4051355) docker-default
- /bin/node_exporter (4068210) docker-default
On hvs004 (the misbehaving host) with the services OSDs and Grafana:
- /bin/node_exporter (5389) docker-default
- /usr/share/grafana/bin/grafana (854839) docker-default
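
The listings above appear to be aa-status output. For reference, a minimal Python sketch of how the same per-process labels can be read straight from procfs, assuming a standard AppArmor-enabled kernel (run as root; nothing here is Ceph-specific):

import os

# List which AppArmor profile each running process is confined by.
# /proc/<pid>/attr/current holds e.g. "docker-default (enforce)" or
# "unconfined"; reading it for other users' processes needs root.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/attr/current") as f:
            label = f.read().strip("\x00\n\t ")
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
    except OSError:
        continue  # process exited or attribute not readable
    if label and label != "unconfined":
        print(f"{comm} ({pid}) {label}")
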
Neither host has Ceph profiles in /etc/apparmor.d. Are the
AppArmor profiles created when each service is initialized?
There are no apparmor="DENIED" entries in syslog...
I've read David Orman's mail, but one bug report relates to podman
and the other to runc. I'm not sure (nor convinced) that I'm
affected by either of these bugs.
I've disabled AppArmor on the misbehaving host and reset the host;
the issue seems to be resolved...
I'll keep the cluster running like this for now, just to make sure
everything stays well. Then I'll re-enable AppArmor, and if the issue
reappears, I'll file a bug report against Ubuntu.
@Eugen and @David, thanks for the input!
-----Original message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Wednesday, 16 October 2024 17:24
To: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
CC: ceph-users@xxxxxxx
Subject: Re: Re: Ubuntu 24.02 LTS Ceph status warning
Is AppArmor configured differently on those hosts? Or is it running
only on the misbehaving host?
Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
> 'ceph config get mgr container_image' gives
>
> quay.io/ceph/ceph@sha256:200087c35811bf28e8a8073b15fa86c07cce85c575f1ccd62d1d6ddbfdc6770a
>
> 'ceph health detail' gives
> HEALTH_WARN failed to probe daemons or devices
> [WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
> host hvs004 `cephadm gather-facts` failed: cephadm exited with an error code: 1, stderr:
> Traceback (most recent call last):
>   File "<frozen runpy>", line 198, in _run_module_as_main
>   File "<frozen runpy>", line 88, in _run_code
>   File "/var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5579, in <module>
>   ...
>   File "/var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 722, in _fetch_apparmor
> ValueError: too many values to unpack (expected 2)
> host hvs004 `cephadm ceph-volume` failed: cephadm exited with an error code: 1, stderr:
> Inferring config /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/config/ceph.conf
> Traceback (most recent call last):
>   File "<frozen runpy>", line 198, in _run_module_as_main
>   File "<frozen runpy>", line 88, in _run_code
>   File "/var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/__main__.py", line 5579, in <module>
>   ...
>   File "/var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/cephadm.a58127a8eed242cae13849ddbebcb9931d7a5410f406f2d264e3b1ed31d9605e/cephadmlib/host_facts.py", line 722, in _fetch_apparmor
> ValueError: too many values to unpack (expected 2)
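
The failing frame in both tracebacks is cephadm's AppArmor fact gathering (host_facts.py, _fetch_apparmor). The error message suggests that one line in the kernel's AppArmor profile listing does not split into the expected two fields (profile name and mode), which can happen for instance when a profile name itself contains a space. Below is a minimal, hypothetical sketch of that failure mode and of a tolerant variant, assuming the usual /sys/kernel/security/apparmor/profiles format; it is not the actual cephadm code.

# Sketch of the parsing pattern that raises
# "ValueError: too many values to unpack (expected 2)".
PROFILES = "/sys/kernel/security/apparmor/profiles"  # readable by root

def profile_modes_strict(path=PROFILES):
    modes = {}
    with open(path) as f:
        for line in f:
            # "docker-default (enforce)" -> ("docker-default", "(enforce)");
            # a profile name containing a space yields three fields and
            # raises ValueError: too many values to unpack (expected 2).
            name, mode = line.strip().split(" ")
            modes[mode] = modes.get(mode, 0) + 1
    return modes

def profile_modes_tolerant(path=PROFILES):
    modes = {}
    with open(path) as f:
        for line in f:
            # Split from the right, so spaces inside the profile name
            # stay part of the name.
            name, _, mode = line.strip().rpartition(" ")
            modes[mode] = modes.get(mode, 0) + 1
    return modes

print(profile_modes_tolerant())

A failure of this kind would also be consistent with the warning disappearing after AppArmor was disabled on that host, since the profile listing is then effectively empty.
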
>
> I do think it's a Ceph version issue, so I started to compare the
> hvs004 host with a well-behaving host, hvs001. I found this:
> 'root@hvs001~#:cephadm shell ceph -v' gives ceph version 19.2.0
> (16063ff2022298c9300e49a547a16ffda59baf13) squid (stable)
> 'root@hvs004~#:cephadm shell ceph -v' gives ceph version
> 19.3.0-5346-gcc481a63
> (cc481a63bc03a534cb8e2e961293d6509ba59401) squid (dev)
>
> It seems only the shell uses a wrong Docker image, so I took a list
> of images from both hosts:
> hvs001
> ----------
> REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
> quay.io/ceph/ceph                  v19       37996728e013   2 weeks ago     1.28GB
> quay.io/ceph/ceph                  v18.2     2bc0b0f4375d   2 months ago    1.22GB
> quay.io/ceph/ceph                  v18       a27483cc3ea0   6 months ago    1.26GB
> quay.io/ceph/ceph                  v17       5a04c8b3735d   9 months ago    1.27GB
> quay.io/ceph/ceph-grafana          9.4.7     954c08fa6188   10 months ago   633MB
> quay.io/ceph/grafana               9.4.12    2bacad6d85d8   17 months ago   330MB
> quay.io/prometheus/prometheus      v2.43.0   a07b618ecd1d   19 months ago   234MB
> quay.io/prometheus/alertmanager    v0.25.0   c8568f914cd2   22 months ago   65.1MB
> quay.io/prometheus/node-exporter   v1.5.0    0da6a335fe13   22 months ago   22.5MB
> quay.io/ceph/ceph                  v17.2     0912465dcea5   2 years ago     1.34GB
> quay.io/ceph/ceph                  v17.2.3   0912465dcea5   2 years ago     1.34GB
> quay.io/ceph/ceph-grafana          8.3.5     dad864ee21e9   2 years ago     558MB
> quay.ceph.io/ceph-ci/ceph          master    c5ce177c6a5d   2 years ago     1.38GB
> quay.io/prometheus/prometheus      v2.33.4   514e6a882f6e   2 years ago     204MB
> quay.io/prometheus/node-exporter   v1.3.1    1dbe0e931976   2 years ago     20.9MB
> quay.io/prometheus/alertmanager    v0.23.0   ba2b418f427c   3 years ago     57.5MB
> quay.io/ceph/ceph-grafana          6.7.4     557c83e11646   3 years ago     486MB
> quay.io/prometheus/prometheus      v2.18.1   de242295e225   4 years ago     140MB
> quay.io/prometheus/alertmanager    v0.20.0   0881eb8f169f   4 years ago     52.1MB
> quay.io/prometheus/node-exporter   v0.18.1   e5a616e4b9cf   5 years ago     22.9MB
>
> hvs004
> ---------
> REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
> quay.ceph.io/ceph-ci/ceph          main      6e76ca06f33a   11 days ago     1.41GB
> quay.io/ceph/ceph                  <none>    37996728e013   2 weeks ago     1.28GB
> quay.io/ceph/ceph                  v17       9cea3956c04b   18 months ago   1.16GB
> quay.io/prometheus/node-exporter   v1.5.0    0da6a335fe13   22 months ago   22.5MB
> quay.io/prometheus/node-exporter   v1.3.1    1dbe0e931976   2 years ago     20.9MB
>
> On hvs004 I pulled the v19-tagged image, and 'cephadm shell ceph -v'
> then gave the correct version.
>
> It seems my Docker images aren't automatically managed by Ceph?
>
> Can I fix this, or do I have to pull the correct images and remove the
> wrong ones myself?
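
A quick way to spot which locally cached images do not belong to the configured release is to compare them against the output of 'ceph config get mgr container_image'. A minimal Python sketch, assuming Docker as the container engine; the matching rule is illustrative only, not an official cleanup procedure:

import subprocess
import sys

# Usage: python3 check_images.py <configured image>
# e.g.   python3 check_images.py quay.io/ceph/ceph@sha256:200087c3...
configured = sys.argv[1]

# One "repository tag id" triple per local image.
out = subprocess.run(
    ["docker", "image", "ls", "--format", "{{.Repository}} {{.Tag}} {{.ID}}"],
    check=True, capture_output=True, text=True,
).stdout

for line in out.splitlines():
    repo, tag, image_id = line.split()
    # Flag Ceph images pulled from a repository other than the configured
    # one (e.g. quay.ceph.io/ceph-ci/ceph when quay.io/ceph/ceph is expected).
    if repo.endswith("/ceph") and not configured.startswith(repo):
        print(f"candidate for manual removal: {repo}:{tag} ({image_id})")

Images flagged this way (such as the quay.ceph.io/ceph-ci/ceph:main image on hvs004) could then be removed with 'docker rmi', after checking that no container still uses them.
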
>
>
>> -----Original message-----
>> From: Eugen Block <eblock@xxxxxx>
>> Sent: Friday, 11 October 2024 13:03
>> To: ceph-users@xxxxxxx
>> Subject: Re: Ubuntu 24.02 LTS Ceph status warning
>>
>> I don't think the warning is related to a specific Ceph version. The
>> orchestrator uses the default image anyway; you can get it via:
>>
>> ceph config get mgr container_image
>>
>> 'ceph health detail' should reveal which host or daemon misbehaves.
>> I would then look into cephadm.log on that host to find more hints
>> about what exactly goes wrong. You should also look into the active
>> MGR log; it could give you hints about why that service fails.
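
For the cephadm.log part, a minimal Python sketch that pulls the most recent tracebacks out of the host's log; /var/log/ceph/cephadm.log is assumed as the log location, adjust if your deployment logs elsewhere:

from pathlib import Path

LOG = Path("/var/log/ceph/cephadm.log")  # assumed default location

def recent_errors(path=LOG, context=6, keep=5):
    """Return the last few traceback/error snippets with some context."""
    lines = path.read_text(errors="replace").splitlines()
    hits = []
    for i, line in enumerate(lines):
        if "Traceback" in line or "ERROR" in line:
            hits.append("\n".join(lines[i:i + context]))
    return hits[-keep:]

for block in recent_errors():
    print(block)
    print("-" * 40)
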
>>
>> Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
>>
>> > I manage a 4-host cluster on Ubuntu 22.04 LTS with Ceph installed
>> > through cephadm, with containers on Docker.
>> >
>> > Last month I migrated to the latest Ceph 19.2. All went great.
>> >
>> > Last week I upgraded one of my hosts to Ubuntu 24.04.1 LTS. Now I
>> > get the following warning in 'cephadm shell -- ceph status':
>> > Failed to apply 1 service(s): osd.all-available-devices failed to
>> > probe daemons or devices
>> >
>> > Outside the cephadm shell, 'ceph -v' results in 'ceph version
>> > 19.2.0~git20240301.4c76c50
>> > (4c76c50a73f63ba48ccdf0adccce03b00d1d80c7) squid (dev)'
>> >
>> > Inside the shell: 'ceph version 19.3.0-5346-gcc481a63
>> > (cc481a63bc03a534cb8e2e961293d6509ba59401) squid (dev)'
>> > All OSDs, MONs, MGRs and MDSs are on 19.2.0 (image ID
>> > 37996728e013)
>> >
>> > Do I get the warning because the Ubuntu package of Ceph is still on
>> > a development version?
>> > Or could there be another underlying problem?
>> >
>> > Thanks for the help.
>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx