Hi Cassiano,
Thanks for your valuable feedback; we will wait for some time until the new OSD sync completes. Also, will increasing the PG count solve the issue? In our setup, the data and metadata pools each have pg_num set to 250. Is that correct for 7 OSDs with 2 replicas? The currently stored data size is 17TB.
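
For what it is worth, here is my rough calculation based on the usual Ceph rule of thumb of about 100 PGs per OSD; please correct me if I am applying it wrongly:

    total pg_num across pools ~= (OSDs x 100) / replica size
                               = (7 x 100) / 2
                               = 350, normally rounded to a power of two (e.g. 512)

At the moment we have 64 + 250 + 250 = 564 PGs at size 2 on 7 OSDs, i.e. roughly 161 PGs per OSD, and 250 is not a power of two, so I am not sure whether the PG numbers themselves are the problem or only the data distribution.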
ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL %USE  VAR  PGS
 0 3.29749  1.00000 3376G  2814G   562G 83.35 1.23 165
 1 3.26869  1.00000 3347G  1923G  1423G 57.48 0.85 152
 2 3.27339  1.00000 3351G  1980G  1371G 59.10 0.88 161
 3 3.24089  1.00000 3318G  2131G  1187G 64.23 0.95 168
 4 3.24089  1.00000 3318G  2998G   319G 90.36 1.34 176
 5 3.32669  1.00000 3406G  2476G   930G 72.68 1.08 165
 6 3.27800  1.00000 3356G  1518G  1838G 45.24 0.67 166
              TOTAL 23476G 15843G  7632G 67.49
MIN/MAX VAR: 0.67/1.34  STDDEV: 14.53
ceph osd tree
ID WEIGHT   TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 22.92604 root default
-2  3.29749     host intcfs-osd1
 0  3.29749         osd.0             up  1.00000          1.00000
-3  3.26869     host intcfs-osd2
 1  3.26869         osd.1             up  1.00000          1.00000
-4  3.27339     host intcfs-osd3
 2  3.27339         osd.2             up  1.00000          1.00000
-5  3.24089     host intcfs-osd4
 3  3.24089         osd.3             up  1.00000          1.00000
-6  3.24089     host intcfs-osd5
 4  3.24089         osd.4             up  1.00000          1.00000
-7  3.32669     host intcfs-osd6
 5  3.32669         osd.5             up  1.00000          1.00000
-8  3.27800     host intcfs-osd7
 6  3.27800         osd.6             up  1.00000          1.00000
ceph osd pool ls detail
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 3 'downloads_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 250 pgp_num 250 last_change 39 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 4 'downloads_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 250 pgp_num 250 last_change 36 flags hashpspool stripe_width 0
Regards
Prabu GJ