Check the in-flight ops for RGW:

[root@node06 ceph]# ceph daemon /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok objecter_requests | jq ".ops" | jq 'length'
8

List a subdir with s5cmd:

[root@node01 deeproute]# time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls s3://mlp-data-warehouse/ads_prediction/
                                  DIR  prediction_scenes/
                                  DIR  test_pai/

real    0m1.125s
user    0m0.007s
sys     0m0.016s

After the ops increase:

[root@node06 ceph]# ceph daemon /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok objecter_requests | jq ".ops" | jq 'length'
264

List the same subdir with s5cmd:

[root@node01 deeproute]# time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls s3://mlp-data-warehouse/ads_prediction/
                                  DIR  prediction_scenes/
                                  DIR  test_pai/

real    0m8.822s
user    0m0.004s
sys     0m0.019s

And if the ops increase to more than 2000, listing the same subdir takes more than 100s. Why?
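For reference, this is roughly how the two numbers can be sampled together over time (a minimal sketch; the asok path, endpoint and bucket are the ones from the output above and would need adjusting for another environment):

#!/bin/bash
# Sample the RGW objecter in-flight op count and the subdir listing latency together.
ASOK=/var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok
for i in $(seq 1 10); do
    ops=$(ceph daemon "$ASOK" objecter_requests | jq '.ops | length')
    t0=$(date +%s.%N)
    ./s5cmd --endpoint-url=http://10.x.x.x:80 ls s3://mlp-data-warehouse/ads_prediction/ > /dev/null
    t1=$(date +%s.%N)
    echo "ops=$ops list_time=$(echo "$t1 - $t0" | bc)s"
    sleep 5
done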