On Friday, October 27, 2023 2:40:17 AM EDT Eugen Block wrote:
> Are the issues you refer to the same as before? I don't think this
> version issue is the root cause, I do see it as well in my test
> cluster(s) but the rest works properly except for the tag issue I
> already reported which you can easily fix by setting the config value
> for the default image
> (https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/LASBJCSPFGDYAWPVE2YLV2ZLF3HC5SLS/#LASBJCSPFGDYAWPVE2YLV2ZLF3HC5SLS).
> Or are there new issues you encountered?

I concur. That `cephadm version` failure is a known issue, but it should
not be the cause of any other problems. On the main branch `cephadm
version` no longer fails this way; instead it reports the version of the
cephadm build itself and no longer inspects a container image. (There is
a small illustration of the failure mode below the quoted traceback.) We
can look into backporting this before the next reef release.

The issue related to the container image tag that Eugen filed has also
been fixed on reef. Thanks for filing that. (For anyone who needs it in
the meantime, Eugen's workaround is sketched below the quoted traceback
as well.)

Martin, you may want to retry things after the next reef release.
Unfortunately, I don't know when that is planned, but I think it's
soonish.

> > Zitat von Martin Conway <martin.conway@xxxxxxxxxx>:
> > I just had another look through the issues tracker and found this
> > bug already listed.
> > https://tracker.ceph.com/issues/59428
> >
> > I need to go back to the other issues I am having and figure out if
> > they are related or something different.
> >
> >
> > Hi
> >
> > I wrote before about issues I was having with cephadm in 18.2.0.
> > Sorry, I didn't see the helpful replies because my mail service
> > binned the responses.
> >
> > I still can't get the reef version of cephadm to work properly.
> >
> > I had updated the system rpm to reef (ceph repo) and also upgraded
> > the containerised ceph daemons to reef before my first email.
> >
> > Both the system package cephadm and the one found at
> > /var/lib/ceph/${fsid}/cephadm.* return the same error when running
> > "cephadm version"
> >
> > Traceback (most recent call last):
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 9468, in <module>
> >     main()
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 9456, in main
> >     r = ctx.func(ctx)
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 2108, in _infer_image
> >     ctx.image = infer_local_ceph_image(ctx, ctx.container_engine.path)
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 2191, in infer_local_ceph_image
> >     container_info = get_container_info(ctx, daemon, daemon_name is not None)
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 2154, in get_container_info
> >     matching_daemons = [d for d in daemons if daemon_name_or_type(d)
> >                         == daemon_filter and d['fsid'] == ctx.fsid]
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 2154, in <listcomp>
> >     matching_daemons = [d for d in daemons if daemon_name_or_type(d)
> >                         == daemon_filter and d['fsid'] == ctx.fsid]
> >   File "./cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e", line 217, in __getattr__
> >     return super().__getattribute__(name)
> > AttributeError: 'CephadmContext' object has no attribute 'fsid'
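For anyone who hits the tag issue before the fixed release lands,
Eugen's workaround boils down to pinning the container image explicitly
instead of relying on the built-in default. Something along these lines
should do it (the exact registry and tag are up to you; the thread
linked above has the authoritative details):

    ceph config set global container_image quay.io/ceph/ceph:v18.2.0

New daemons and redeploys will then use the pinned image rather than
whatever default the shipped cephadm points at.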
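And to make the quoted traceback a little less mysterious, here is a toy
illustration of the failure mode. This is not cephadm's actual code, and
the real change on main removes the image inspection from `cephadm
version` entirely; the snippet only shows why dereferencing ctx.fsid
blows up when the version sub-command never sets it:

    # Toy stand-in for cephadm's context object (not the real class).
    class CephadmContext:
        def __getattr__(self, name):
            # Same pattern as line 217 in the traceback: fall through to
            # normal lookup, which raises AttributeError for anything
            # that was never set on the context.
            return super().__getattribute__(name)

    daemons = [{'name': 'mon.a', 'fsid': 'abc-123'}]  # made-up daemon list
    ctx = CephadmContext()  # "cephadm version" builds a context without fsid

    try:
        matching = [d for d in daemons if d['fsid'] == ctx.fsid]
    except AttributeError as exc:
        print('fails just like the report:', exc)

    # A defensive lookup avoids the crash (purely illustrative, not the
    # actual upstream fix):
    fsid = getattr(ctx, 'fsid', None)
    matching = [d for d in daemons if fsid is None or d['fsid'] == fsid]
    print(matching)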