Re: Issue with Nautilus upgrade from Luminous

Hi Dominic, all,

After going through the errors in detail and reviewing the output of
"ceph features", I set *ceph osd set-require-min-compat-client luminous*,
which cleared the warning. I have fixed the remaining warnings as well,
and the cluster is now healthy.
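
For anyone hitting the same warnings later, the sequence was roughly the
following. The tunables step is my best reconstruction of what cleared
the remaining legacy-tunables warning; check "ceph features" first, since
raising the profile can trigger data movement:

# Confirm that no connected clients are older than Luminous.
ceph features

# Require Luminous-or-newer clients (this cleared the first warning).
ceph osd set-require-min-compat-client luminous

# Raise the crush tunables profile from firefly to hammer
# (enables straw2 buckets; expect some rebalancing).
ceph osd crush tunables hammer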

Thank you, everyone, for taking the time to respond.

Regards,
suresh

On Fri, Jul 9, 2021 at 3:37 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:

> Suresh;
>
> I don't believe we use tunables, so I'm not terribly familiar with them.
>
> A quick Google search ("ceph tunable") supplied the following pages:
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/storage_strategies/crush_tunables
> https://docs.ceph.com/en/latest/rados/operations/crush-map/#tunables
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Vice President - Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
> -----Original Message-----
> From: Suresh Rama [mailto:sstkadu@xxxxxxxxx]
> Sent: Thursday, July 8, 2021 7:25 PM
> To: ceph-users
> Subject:  Issue with Nautilus upgrade from Luminous
>
> Dear All,
>
> We have 13 Ceph clusters, and we started upgrading them one by one from
> Luminous to Nautilus. After the upgrade we started fixing the warning
> alerts, but setting *ceph config set mon mon_crush_min_required_version
> firefly* yielded no results. We updated the mon config and restarted the
> daemons, but the warning didn't go away.
>
> I have also tried setting it to hammer, to no avail; the warning is
> still there. Do you have any recommendations? I thought of changing it
> to hammer so I could use straw2, but I am stuck with the warning
> message. I have also bounced the nodes, and the issue remains the same.
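>
> [Note for the archives: mon_crush_min_required_version only sets the
> floor the monitor warns against; it does not change the crush map
> itself. The legacy-tunables warning tracks the crush map's profile,
> which is changed with commands along these lines (both can move data,
> so confirm client compatibility with "ceph features" first):
>
> # Raise the whole tunables profile from firefly to hammer.
> ceph osd crush tunables hammer
>
> # Or convert only the bucket algorithm to straw2, leaving the
> # rest of the profile untouched.
> ceph osd crush set-all-straw-buckets-to-straw2 ]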
>
> Please review and share your inputs.
>
>   cluster:
>     id:     xxxxxxxxxxx
>     health: HEALTH_WARN
>             crush map has legacy tunables (require firefly, min is hammer)
>             1 pools have many more objects per pg than average
>             15252 pgs not deep-scrubbed in time
>             21399 pgs not scrubbed in time
>             clients are using insecure global_id reclaim
>             mons are allowing insecure global_id reclaim
>             3 monitors have not enabled msgr2
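>
> [The global_id and msgr2 warnings above have standard remedies in
> Nautilus 14.2.20+; a sketch, assuming all clients are patched for
> CVE-2021-20288:
>
> # Stop accepting insecure global_id reclaim (only after every
> # client has been upgraded, or they will be locked out).
> ceph config set mon auth_allow_insecure_global_id_reclaim false
>
> # Enable the msgr2 protocol on the monitors.
> ceph mon enable-msgr2 ]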
>
>
> ceph daemon mon.$(hostname -s) config show | grep -i mon_crush_min_required_version
>     "mon_crush_min_required_version": "firefly",
>
> ceph osd crush show-tunables
> {
>     "choose_local_tries": 0,
>     "choose_local_fallback_tries": 0,
>     "choose_total_tries": 50,
>     "chooseleaf_descend_once": 1,
>     "chooseleaf_vary_r": 1,
>     "chooseleaf_stable": 0,
>     "straw_calc_version": 1,
>     "allowed_bucket_algs": 22,
>     "profile": "firefly",
>     "optimal_tunables": 0,
>     "legacy_tunables": 0,
>     "minimum_required_version": "firefly",
>     "require_feature_tunables": 1,
>     "require_feature_tunables2": 1,
>     "has_v2_rules": 0,
>     "require_feature_tunables3": 1,
>     "has_v3_rules": 0,
>     "has_v4_buckets": 0,
>     "require_feature_tunables5": 0,
>     "has_v5_rules": 0
> }
>
> ceph config dump
> WHO   MASK LEVEL    OPTION                         VALUE   RO
>   mon      advanced mon_crush_min_required_version firefly *
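>
> [Once resolved, this override can be removed again with
> "ceph config rm mon mon_crush_min_required_version".]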
>
> ceph versions
> {
>     "mon": {
>         "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
> nautilus (stable)": 3
>     },
>     "mgr": {
>         "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
> nautilus (stable)": 3
>     },
>     "osd": {
>         "ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6)
> nautilus (stable)": 549,
>         "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
> nautilus (stable)": 226
>     },
>     "mds": {},
>     "rgw": {
>         "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
> nautilus (stable)": 2
>     },
>     "overall": {
>         "ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6)
> nautilus (stable)": 549,
>         "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
> nautilus (stable)": 234
>     }
> }
>
> ceph -s
>   cluster:
>     id:    xxxxxxxxxxxxxxxxxx
>     health: HEALTH_WARN
>             crush map has legacy tunables (require firefly, min is hammer)
>             1 pools have many more objects per pg than average
>             13811 pgs not deep-scrubbed in time
>             19994 pgs not scrubbed in time
>             clients are using insecure global_id reclaim
>             mons are allowing insecure global_id reclaim
>             3 monitors have not enabled msgr2
>
>   services:
>     mon: 3 daemons, quorum
> pistoremon-ho-c01,pistoremon-ho-c02,pistoremon-ho-c03 (age 24s)
>     mgr: pistoremon-ho-c02(active, since 2m), standbys: pistoremon-ho-c01,
> pistoremon-ho-c03
>     osd: 800 osds: 775 up (since 105m), 775 in
>     rgw: 2 daemons active (pistorergw-ho-c01, pistorergw-ho-c02)
>
>   task status:
>
>   data:
>     pools:   28 pools, 27336 pgs
>     objects: 107.19M objects, 428 TiB
>     usage:   1.3 PiB used, 1.5 PiB / 2.8 PiB avail
>     pgs:     27177 active+clean
>              142   active+clean+scrubbing+deep
>              17    active+clean+scrubbing
>
>   io:
>     client:   220 MiB/s rd, 1.9 GiB/s wr, 7.07k op/s rd, 25.42k op/s wr
>
> --
> Regards,
> Suresh
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>

-- 
Regards,
Suresh
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


