Re: 6 pgs not deep-scrubbed in time

This is how it is currently set; if you would suggest any changes, please advise.

Thank you.


ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1407
flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application
mgr_devicehealth
pool 2 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 1393 flags
hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
1394 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
1395 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
1396 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
pool 6 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 108802 lfor
0/0/14812 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
        removed_snaps_queue
[22d7~3,11561~2,11571~1,11573~1c,11594~6,1159b~f,115b0~1,115b3~1,115c3~1,115f3~1,115f5~e,11613~6,1161f~c,11637~1b,11660~1,11663~2,11673~1,116d1~c,116f5~10,11721~c]
pool 7 'images' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 94609 flags
hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 8 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 1399 flags
hashpspool stripe_width 0 application rbd
pool 9 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins
pg_num 32 pgp_num 32 autoscale_mode on last_change 108783 lfor 0/561/559
flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
        removed_snaps_queue [3fa~1,3fc~3,400~1,402~1]
pool 10 'testbench' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 20931 lfor
0/20931/20929 flags hashpspool stripe_width 0
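
Based on Janne's advice below, would something like this be the right way to
do it? Just a rough sketch on my side, assuming 'volumes' is the pool to
change and that 256 is a sensible target for this cluster (please correct me
if not):

# check the current PG count per OSD first (PGS column)
ceph osd df tree

# the autoscaler is on for this pool, so turn it off for 'volumes'
# (otherwise it may scale the change back down)
ceph osd pool set volumes pg_autoscale_mode off

# raise pg_num to the next power of two; 256 is only an example target,
# pgp_num follows automatically on recent releases
ceph osd pool set volumes pg_num 256

# watch the backfill / remapping progress
ceph -s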


On Mon, Jan 29, 2024 at 2:09 PM Michel Niyoyita <micou12@xxxxxxxxx> wrote:

> Thank you, Janne,
>
> Is there no need to set any flags, like ceph osd set nodeep-scrub?
>
> Thank you
>
> On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson <icepic.dz@xxxxxxxxx>
> wrote:
>
>> On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita <micou12@xxxxxxxxx> wrote:
>> >
>> > Thank you, Frank,
>> >
>> > All disks are HDDs. I would like to know if I can increase the number of
>> > PGs live in production without a negative impact on the cluster. If yes,
>> > which commands should I use?
>>
>> Yes. "ceph osd pool set <poolname> pg_num <number larger than before>"
>> where the number usually should be a power of two that leads to a
>> number of PGs per OSD between 100-200.
>>
>> --
>> May the most significant bit of your life be positive.
>>
>
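
Regarding my question above about the flags: would it be reasonable, in the
meantime, to just check the affected PGs and deep-scrub them by hand,
roughly like this (the PG id is only an example)?

# see which PGs are reported as not deep-scrubbed in time
ceph health detail

# manually trigger a deep scrub on one of the reported PGs
# (6.1f is just a placeholder, use the ids from health detail)
ceph pg deep-scrub 6.1f

# and only if scrubbing should be paused during the pg_num change:
ceph osd set nodeep-scrub
# ... then re-enable afterwards:
ceph osd unset nodeep-scrub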
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



