Hi all,

I've just done a fresh Ceph 15.2.7 installation with 3 mon, 4 osd and 3 rgw daemons. After enabling the RGW dashboard, the Buckets page pops up a 500 Internal Server Error, while users and daemons are listed fine in the dashboard, and radosgw-admin can list everything without problems as well. I've tried turning on rgw and mgr debug logging; the relevant logs are below (the commands I used are at the bottom of this mail, after the logs). I'd appreciate any ideas on how to fix this. Thank you!

mgr log:

debug 2020-12-07T07:48:32.761+0000 7f057a17d700 0 [dashboard ERROR request] [172.25.88.12:44024] [GET] [500] [0.050s] [admin] [513.0B] /api/rgw/bucket/test91
debug 2020-12-07T07:48:32.761+0000 7f057a17d700 0 [dashboard ERROR request] [b'{"status": "500 Internal Server Error", "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.", "request_id": "0b4cfe92-c25d-474f-ad51-05d441c260df"}

rgw log:

debug 2020-12-07T09:09:20.725+0000 7f2d16bdf700 1 ====== starting new request req=0x7f2d76d913a0 =====
debug 2020-12-07T09:09:20.725+0000 7f2d16bdf700 2 req 4313 0s initializing for trans_id = tx0000000000000000010d9-005fcdf140-40bb4-cast
debug 2020-12-07T09:09:20.725+0000 7f2d16bdf700 2 req 4313 0s getting op 0
debug 2020-12-07T09:09:20.725+0000 7f2d16bdf700 2 req 4313 0s s3:list_bucket verifying requester
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket normalizing buckets and tenants
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket init permissions
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket recalculating target
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket reading permissions
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket init op
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket verifying op mask
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket verifying op permissions
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 req 4313 0.030002579s s3:list_bucket Searching permissions for identity=rgw::auth::SysReqApplier -> rgw::auth::RemoteApplier(acct_user=test91-tsa, acct_name=test91-tsa, perm_mask=15, is_admin=0) mask=49
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Searching permissions for uid=test91-tsa
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Found permission: 15
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Searching permissions for uid=test91-tsa$test91-tsa
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Permissions for user not found
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Searching permissions for group=1 mask=49
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Permissions for group not found
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Searching permissions for group=2 mask=49
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 Permissions for group not found
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 5 req 4313 0.030002579s s3:list_bucket -- Getting permissions done for identity=rgw::auth::SysReqApplier -> rgw::auth::RemoteApplier(acct_user=test91-tsa, acct_name=test91-tsa, perm_mask=15, is_admin=0), owner=test91-tsa, perm=1
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket verifying op params
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket pre-executing
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 2 req 4313 0.030002579s s3:list_bucket executing
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.20:6800/2627145462,v1:172.25.88.20:6801/2627145462] -- osd_op(unknown.0.0:36867 15.5 15:a0395ab0:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.0:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b1258ffb80 con 0x55b125517800
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.19:6800/4042281875,v1:172.25.88.19:6801/4042281875] -- osd_op(unknown.0.0:36868 15.4 15:2069ca14:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.1:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b126b0c780 con 0x55b125516400
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.17:6800/1165805227,v1:172.25.88.17:6801/1165805227] -- osd_op(unknown.0.0:36869 15.2 15:40c91296:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.2:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b129b04f00 con 0x55b1246e4800
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.17:6800/1165805227,v1:172.25.88.17:6801/1165805227] -- osd_op(unknown.0.0:36870 15.2 15:58086687:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.3:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b129b05680 con 0x55b1246e4800
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.17:6800/1165805227,v1:172.25.88.17:6801/1165805227] -- osd_op(unknown.0.0:36871 15.7 15:e9aebc19:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.4:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b127f5f180 con 0x55b1246e4800
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.17:6800/1165805227,v1:172.25.88.17:6801/1165805227] -- osd_op(unknown.0.0:36872 15.7 15:e0023539:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.5:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b1279f6f00 con 0x55b1246e4800
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.19:6800/4042281875,v1:172.25.88.19:6801/4042281875] -- osd_op(unknown.0.0:36873 15.1 15:9600557f:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.6:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b12b8ec280 con 0x55b125516400
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.19:6800/4042281875,v1:172.25.88.19:6801/4042281875] -- osd_op(unknown.0.0:36874 15.4 15:2984dfaf:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.7:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b12a320500 con 0x55b125516400
debug 2020-12-07T09:09:20.755+0000 7f2d16bdf700 1 -- 172.25.88.14:0/3384359887 --> [v2: 172.25.88.20:6800/2627145462,v1:172.25.88.20:6801/2627145462] -- osd_op(unknown.0.0:36875 15.5 15:b8046cce:::.dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.8:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e1209) v8 -- 0x55b129b20280 con 0x55b125517800
debug 2020-12-07T09:09:20.756+0000 7f2d65652700 1 -- 172.25.88.14:0/3384359887 <== osd.0 v2:172.25.88.17:6800/1165805227 7141 ==== osd_op_reply(36872 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.5 [call out=1507b] v0'0 uv369 ondisk = 0) v8 ==== 196+0+1507 (crc 0 0 0) 0x55b12b719b00 con 0x55b1246e4800
debug 2020-12-07T09:09:20.756+0000 7f2d65652700 1 -- 172.25.88.14:0/3384359887 <== osd.0 v2:172.25.88.17:6800/1165805227 7142 ==== osd_op_reply(36870 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.3 [call out=2033b] v0'0 uv293 ondisk = 0) v8 ==== 196+0+2033 (crc 0 0 0) 0x55b12b719b00 con 0x55b1246e4800
debug 2020-12-07T09:09:20.756+0000 7f2d65652700 1 -- 172.25.88.14:0/3384359887 <== osd.0 v2:172.25.88.17:6800/1165805227 7143 ==== osd_op_reply(36876 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.9 [call out=1626b] v0'0 uv371 ondisk = 0) v8 ==== 196+0+1626 (crc 0 0 0) 0x55b12b719b00 con 0x55b1246e4800
debug 2020-12-07T09:09:20.756+0000 7f2d65652700 1 -- 172.25.88.14:0/3384359887 <== osd.3 v2:172.25.88.20:6800/2627145462 6963 ==== osd_op_reply(36867 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.0 [call out=642b] v0'0 uv211 ondisk = 0) v8 ==== 196+0+642 (crc 0 0 0) 0x55b12b719b00 con 0x55b125517800
debug 2020-12-07T09:09:20.756+0000 7f2d65652700 1 -- 172.25.88.14:0/3384359887 <== osd.3 v2:172.25.88.20:6800/2627145462 6964 ==== osd_op_reply(36875 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.8 [call out=1610b] v0'0 uv213 ondisk = 0) v8 ==== 196+0+1610 (crc 0 0 0) 0x55b12b719b00 con 0x55b125517800
debug 2020-12-07T09:09:20.756+0000 7f2d64650700 1 -- 172.25.88.14:0/3384359887 <== osd.2 v2:172.25.88.19:6800/4042281875 10713 ==== osd_op_reply(36868 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.1 [call out=2931b] v0'0 uv199 ondisk = 0) v8 ==== 196+0+2931 (crc 0 0 0) 0x55b12b462fc0 con 0x55b125516400
debug 2020-12-07T09:09:20.757+0000 7f2d64650700 1 -- 172.25.88.14:0/3384359887 <== osd.2 v2:172.25.88.19:6800/4042281875 10714 ==== osd_op_reply(36873 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.6 [call out=149b] v0'0 uv213 ondisk = 0) v8 ==== 196+0+149 (crc 0 0 0) 0x55b12b462fc0 con 0x55b125516400
debug 2020-12-07T09:09:20.757+0000 7f2d64650700 1 -- 172.25.88.14:0/3384359887 <== osd.2 v2:172.25.88.19:6800/4042281875 10715 ==== osd_op_reply(36874 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.7 [call out=110b] v0'0 uv169 ondisk = 0) v8 ==== 196+0+110 (crc 0 0 0) 0x55b12b462fc0 con 0x55b125516400
debug 2020-12-07T09:09:20.757+0000 7f2d64650700 1 -- 172.25.88.14:0/3384359887 <== osd.2 v2:172.25.88.19:6800/4042281875 10716 ==== osd_op_reply(36877 .dir.4f3ad19b-c138-4c21-a860-df0df8841e27.179340.6.10 [call out=1052b] v0'0 uv215 ondisk = 0) v8 ==== 197+0+1052 (crc 0 0 0) 0x55b12b462fc0 con 0x55b125516400
debug 2020-12-07T09:09:20.757+0000 7f2d16bdf700 2 req 4313 0.032002751s s3:list_bucket completing
debug 2020-12-07T09:09:20.757+0000 7f2d06bbf700 2 req 4313 0.032002751s s3:list_bucket op status=0
debug 2020-12-07T09:09:20.757+0000 7f2d06bbf700 2 req 4313 0.032002751s s3:list_bucket http status=200
debug 2020-12-07T09:09:20.757+0000 7f2d06bbf700 1 ====== req done req=0x7f2d76d913a0 op status=0 http_status=200 latency=0.032002751s ======
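
To be concrete about what "enabling the rgw dashboard" means above: I pointed the dashboard at RGW with a dedicated system user, roughly like this (the uid "dashboard" is only the name I picked, and I'm quoting the Octopus-era syntax from memory, so please correct me if that part looks off):

  # create a system user whose keys the dashboard will use for the RGW admin API
  radosgw-admin user create --uid=dashboard --display-name="Ceph Dashboard" --system
  # hand the keys printed by the command above to the dashboard
  ceph dashboard set-rgw-api-access-key <access_key from the create output>
  ceph dashboard set-rgw-api-secret-key <secret_key from the create output>

After that, the Users and Daemons pages of the object gateway section work fine; only the Buckets page (the /api/rgw/bucket/<name> call you see in the mgr log) returns the 500.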
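
For the logs above, the debug levels were raised with commands along these lines, and the radosgw-admin lines are examples of the CLI checks I mean by "can list everything" (20 is just the level I happened to pick, and the dashboard debug switch was an extra attempt to get a traceback into the 500 body):

  # raise rgw / mgr log levels
  ceph config set client.rgw debug_rgw 20
  ceph config set mgr debug_mgr 20
  # dashboard's own debug mode (should make the 500 response carry a traceback)
  ceph dashboard debug enable

  # CLI-side sanity checks; both return everything without errors
  radosgw-admin bucket list
  radosgw-admin bucket stats --bucket=test91

So from the CLI side the bucket index looks healthy, which is why I'm puzzled that only the dashboard request fails.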