Re: Reducing pg_num from 1024 to 32 takes a long time; is there a way to shorten it?

ceph df detail:
[root@k8s-1 ~]# ceph df detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    600 GiB  600 GiB  157 MiB   157 MiB       0.03
TOTAL  600 GiB  600 GiB  157 MiB   157 MiB       0.03
 
--- POOLS ---
POOL                                    ID  PGS   STORED   (DATA)  (OMAP)  OBJECTS     USED   (DATA)  (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
device_health_metrics                    1    1      0 B      0 B     0 B        0      0 B      0 B     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.buckets.index    2  230      0 B      0 B     0 B       11      0 B      0 B     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.control          3    8      0 B      0 B     0 B        8      0 B      0 B     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.log              4    8  3.7 KiB  3.7 KiB     0 B      180  420 KiB  420 KiB     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.meta             5    8  1.9 KiB  1.9 KiB     0 B        7   72 KiB   72 KiB     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
.rgw.root                                6    8  4.9 KiB  4.9 KiB     0 B       16  180 KiB  180 KiB     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.buckets.non-ec   7    8      0 B      0 B     0 B        0      0 B      0 B     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.otp              8    8      0 B      0 B     0 B        0      0 B      0 B     0 B      0    158 GiB            N/A          N/A    N/A         0 B          0 B
os-gwbtwlltuklwxrpl.rgw.buckets.data     9   32      0 B      0 B     0 B        0      0 B      0 B     0 B      0    317 GiB            N/A          N/A    N/A         0 B          0 B
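
As a side note, the autoscaler's view of these pools can be dumped as well; a minimal check (only stock ceph CLI commands, pool name copied from the output above) would be:

# Autoscaler's current vs. target PG count for every pool
ceph osd pool autoscale-status
# Current pg_num on the index pool
ceph osd pool get os-gwbtwlltuklwxrpl.rgw.buckets.index pg_num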

ceph osd pool ls detail:
[root@k8s-1 ~]# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 25 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr_devicehealth
pool 2 'os-gwbtwlltuklwxrpl.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 217 pgp_num 209 pg_num_target 32 pgp_num_target 32 autoscale_mode off last_change 322 lfor 0/322/320 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 3 'os-gwbtwlltuklwxrpl.rgw.control' replicated size 3 min_size 2 crush_rule 7 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 4 'os-gwbtwlltuklwxrpl.rgw.log' replicated size 3 min_size 2 crush_rule 6 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 5 'os-gwbtwlltuklwxrpl.rgw.meta' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 6 '.rgw.root' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 7 'os-gwbtwlltuklwxrpl.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 8 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 8 'os-gwbtwlltuklwxrpl.rgw.otp' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 compression_mode none pg_num_min 8 target_size_ratio 0.5 application rook-ceph-rgw
pool 9 'os-gwbtwlltuklwxrpl.rgw.buckets.data' erasure profile os-gwbtwlltuklwxrpl.rgw.buckets.data_ecprofile size 3 min_size 2 crush_rule 9 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 70 flags hashpspool,ec_overwrites stripe_width 8192 compression_mode none target_size_ratio 0.5 application rook-ceph-rgw
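
Pool 2 is the one still merging: it reports pg_num 217, pgp_num 209 and pg_num_target 32, so the mgr is stepping the PG count down in small batches rather than in one jump. For reference, the merge can be watched stepping down with something like this (pool name taken from the output above; ceph progress assumes the mgr progress module is enabled, which it is by default):

# Watch pg_num/pgp_num step down toward pg_num_target
watch -n 10 "ceph osd pool ls detail | grep buckets.index"
# Overall misplaced/recovery state and the mgr's progress events
ceph -s
ceph progress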

I also tested on a smaller Ceph cluster with 6 OSDs: I set the pg_num of the index pool to 256 and then reduced it to 32, and it was still slow.
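
From what I understand, the mgr throttles pg_num decreases so that no more than target_max_misplaced_ratio of the cluster's objects are misplaced at any one time (the default is 0.05), and each merge step then proceeds at normal backfill speed. A rough sketch of the knobs that look relevant on a test cluster like this one (values are only illustrative, and should be reverted afterwards):

# Let the mgr schedule larger pg_num steps at once (default 0.05)
ceph config set mgr target_max_misplaced_ratio 0.5
# Allow more concurrent backfill/recovery per OSD
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8
# Put the defaults back once pg_num has reached its target
ceph config rm mgr target_max_misplaced_ratio
ceph config rm osd osd_max_backfills
ceph config rm osd osd_recovery_max_active

I have not yet measured whether this is enough to make the 1024 -> 32 case acceptably fast.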