Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

pool 13 'mathfs_metadata' replicated size 2 min_size 2 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change

The problem is that you have size=2 and min_size=2 on this pool. The ok-to-stop
command is failing because stopping any OSD serving a PG in this pool would drop
it below min_size, and those PGs would then become inactive. I would increase the
size of this pool to 3 (and I would also do that for all of your other pools which
are size=2).
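For the metadata pool that is blocking the upgrade, something along these lines
should do it (repeat for each size=2 pool; osd.0 below is just a placeholder id,
use whichever OSD you actually want to stop):

$ ceph osd pool set mathfs_metadata size 3
$ ceph osd ok-to-stop 0

Just make sure your CRUSH rule has enough hosts/OSDs to place a third replica,
and let the backfill finish before retrying the upgrade.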

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Thu, May 26, 2022 at 2:22 PM Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxx>
wrote:

> On 5/26/22 14:09, Wesley Dillingham wrote:
> > What does "ceph osd pool ls detail" say?
>
> $ ceph osd pool ls detail
> pool 0 'rbd' replicated size 2 min_size 1 crush_rule 0 object_hash
> rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 44740 flags
> hashpspool,selfmanaged_snaps stripe_width 0 application rbd
> pool 1 '.rgw.root' replicated size 2 min_size 1 crush_rule 0 object_hash
> rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 44740 lfor
> 0/0/31483 owner 18446744073709551615 flags hashpspool stripe_width 0
> application rgw
> pool 2 'default.rgw.control' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31469 owner 18446744073709551615 flags hashpspool
> stripe_width 0 application rgw
> pool 3 'default.rgw.data.root' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31471 owner 18446744073709551615 flags hashpspool
> stripe_width 0 application rgw
> pool 4 'default.rgw.gc' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31471 owner 18446744073709551615 flags hashpspool
> stripe_width 0 application rgw
> pool 5 'default.rgw.log' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31387 owner 18446744073709551615 flags hashpspool
> stripe_width 0 application rgw
> pool 6 'default.rgw.users.uid' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31387 flags hashpspool stripe_width 0 application rgw
> pool 12 'mathfs_data' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/31370/31368 flags hashpspool stripe_width 0 application cephfs
> pool 13 'mathfs_metadata' replicated size 2 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/27164/27162 flags hashpspool stripe_width 0 application cephfs
> pool 15 'default.rgw.lc' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 44740 lfor 0/0/31374 flags hashpspool stripe_width 0 application rgw
> pool 21 'libvirt' replicated size 3 min_size 1 crush_rule 0 object_hash
> rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 56244 lfor
> 0/33144/33142 flags hashpspool,selfmanaged_snaps stripe_width 0
> application rbd
> pool 36 'monthly_archive_metadata' replicated size 2 min_size 1
> crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on
> last_change 45338 lfor 0/27845/27843 flags hashpspool stripe_width 0
> application cephfs
> pool 37 'monthly_archive_data' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 45334 lfor 0/44535/44533 flags hashpspool stripe_width 0 application cephfs
> pool 38 'device_health_metrics' replicated size 2 min_size 1 crush_rule
> 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change
> 56507 flags hashpspool stripe_width 0 pg_num_min 1 application
> mgr_devicehealth
> pool 41 'lensfun_metadata' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 54066 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16
> recovery_priority 5 application cephfs
> pool 42 'lensfun_data' replicated size 2 min_size 1 crush_rule 0
> object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
> 54066 flags hashpspool stripe_width 0 application cephfs
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
