Hello everyone,

After the upgrade from Pacific to Quincy the radosgw service is no longer listening on its network port, although the process is running. I get the following in the log:

2022-12-29T02:07:35.641+0000 7f5df868ccc0 0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process radosgw, pid 36072
2022-12-29T02:07:35.641+0000 7f5df868ccc0 0 framework: civetweb
2022-12-29T02:07:35.641+0000 7f5df868ccc0 0 framework conf key: port, val: 443s
2022-12-29T02:07:35.641+0000 7f5df868ccc0 0 framework conf key: ssl_certificate, val: /etc/ssl/private/s3.arhont.com-bundle.pem
2022-12-29T02:07:35.641+0000 7f5df868ccc0 1 radosgw_Main not setting numa affinity
2022-12-29T02:07:35.645+0000 7f5df868ccc0 1 rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
2022-12-29T02:07:35.645+0000 7f5df868ccc0 1 D3N datacache enabled: 0
2022-12-29T02:07:38.917+0000 7f5d15ffb700 -1 sync log trim: bool {anonymous}::sanity_check_endpoints(const DoutPrefixProvider*, rgw::sal::RadosStore*):688 WARNING: Cluster is is misconfigured! Zonegroup default (default) in Realm london-ldex ( 29474c50-f1c2-4155-ac3b-a42e9d413624) has no endpoints!
2022-12-29T02:07:38.917+0000 7f5d15ffb700 -1 sync log trim: bool {anonymous}::sanity_check_endpoints(const DoutPrefixProvider*, rgw::sal::RadosStore*):698 ERROR: Cluster is is misconfigured! Zone default (default) in Zonegroup default ( default) in Realm london-ldex ( 29474c50-f1c2-4155-ac3b-a42e9d413624) has no endpoints! Trimming is impossible.
2022-12-29T02:07:38.917+0000 7f5d15ffb700 -1 sync log trim: RGWCoroutine* create_meta_log_trim_cr(const DoutPrefixProvider*, rgw::sal::RadosStore*, RGWHTTPManager*, int, utime_t):718 ERROR: Cluster is is misconfigured! Refusing to trim.
2022-12-29T02:07:38.917+0000 7f5d15ffb700 -1 rgw rados thread: Bailing out of trim thread!
2022-12-29T02:07:38.917+0000 7f5d15ffb700 0 rgw rados thread: ERROR: processor->process() returned error r=-22
2022-12-29T02:07:38.953+0000 7f5df868ccc0 0 framework: beast
2022-12-29T02:07:38.953+0000 7f5df868ccc0 0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
2022-12-29T02:07:38.953+0000 7f5df868ccc0 0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
2022-12-29T02:07:38.953+0000 7f5df868ccc0 0 WARNING: skipping unknown framework: civetweb
2022-12-29T02:07:38.977+0000 7f5df868ccc0 1 mgrc service_daemon_register rgw.1371662715 metadata {arch=x86_64,ceph_release=quincy,ceph_version=ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable),ceph_version_short=17.2.5,cpu=Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz,distro=ubuntu,distro_description=Ubuntu 20.04.5 LTS,distro_version=20.04,frontend_config#0=civetweb port=443s ssl_certificate=/etc/ssl/private/s3.arhont.com-bundle.pem,frontend_type#0=civetweb,hostname=arh-ibstorage1-ib,id=radosgw1.gateway,kernel_description=#62~20.04.1-Ubuntu SMP Tue Nov 22 21:24:20 UTC 2022,kernel_version=5.15.0-56-generic,mem_swap_kb=24686688,mem_total_kb=98747048,num_handles=1,os=Linux,pid=36072,realm_id=29474c50-f1c2-4155-ac3b-a42e9d413624,realm_name=london-ldex,zone_id=default,zone_name=default,zonegroup_id=default,zonegroup_name=default}
2022-12-29T02:07:39.177+0000 7f5d057fa700 0 lifecycle: RGWLC::process() failed to acquire lock on lc.29, sleep 5, try again

I had been running the radosgw service on a 15.2.x cluster without any issues. Last week I upgraded the cluster to 16.2.x, followed by a further upgrade to 17.2.
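Looking at the "has no endpoints" warnings, I am guessing I may need to set endpoints on my default zone and zonegroup and commit a new period. Would something along these lines be the right direction? This is untested on my side, and the endpoint URL is just what I would expect for my own setup:

# tell the zonegroup/zone where this gateway can be reached
radosgw-admin zonegroup modify --rgw-zonegroup=default --endpoints=https://s3.arhont.com:443
radosgw-admin zone modify --rgw-zone=default --endpoints=https://s3.arhont.com:443
# commit the change so it takes effect
radosgw-admin period update --commit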
Here is what my configuration file looks like:

[client.radosgw1.gateway]
host = arh-ibstorage1-ib
keyring = /etc/ceph/keyring.radosgw1.gateway
log_file = /var/log/ceph/radosgw.log
rgw_dns_name = s3.arhont.com
rgw_num_rados_handles = 8
rgw_thread_pool_size = 512
rgw_cache_enabled = true
rgw cache lru size = 100000
rgw enable ops log = false
rgw enable usage log = false
rgw_frontends = civetweb port=443s ssl_certificate=/etc/ssl/private/s3.arhont.com-bundle.pem

Could you please let me know how to fix this problem?

Many thanks

Andrei
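PS: I also see "WARNING: skipping unknown framework: civetweb" in the log, so it looks like my civetweb frontend line is no longer accepted and only beast is used. Is something like the following the correct replacement? This is just my guess, assuming my bundle PEM also contains the private key; if not, I suppose an ssl_private_key= option would be needed as well:

# guessed beast equivalent of my old civetweb "port=443s" line
rgw_frontends = beast ssl_port=443 ssl_certificate=/etc/ssl/private/s3.arhont.com-bundle.pem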