Re: [Suspicious newsletter] Issue with Nautilus upgrade from Luminous

I also did an upgrade this week, but mine required at least jewel. Shouldn't it have warned you about that beforehand?
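
For reference, the minimum client release the cluster currently insists on can be read straight off the OSD map; a minimal sketch (these are standard Ceph CLI commands, but that this is the "jewel" requirement meant above is my assumption):

ceph osd dump | grep require_min_compat_client    # show the current requirement
ceph osd set-require-min-compat-client jewel      # raising it is a separate, explicit step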

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Suresh Rama <sstkadu@xxxxxxxxx>
Sent: Friday, July 9, 2021 9:25 AM
To: ceph-users <ceph-users@xxxxxxx>
Subject: [Suspicious newsletter]  Issue with Nautilus upgrade from Luminous

Dear All,

We have 13 Ceph clusters and we started upgrading them one by one from Luminous to Nautilus. Post-upgrade we started fixing the warning alerts, but setting "ceph config set mon mon_crush_min_required_version firefly" yielded no results. We updated the mon config and restarted the daemons, yet the warning didn't go away.

I have also tried setting it to hammer, with no luck; the warning is still there. Do you have any recommendations? I wanted to change it to hammer so I could use straw2, but I was stuck with the warning message. I have also bounced the nodes and the issue remains the same.
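
A minimal sketch of the straw2 route, assuming all clients are hammer or newer and accepting that a tunables change can trigger substantial data movement (the sequence uses standard Ceph CLI commands; whether it clears the stuck warning is exactly the open question here):

ceph osd crush show-tunables                    # confirm the active profile
ceph osd dump | grep require_min_compat_client  # clients must support straw2
ceph osd crush tunables hammer                  # move the tunables profile itself to hammer
ceph osd crush set-all-straw-buckets-to-straw2  # then convert the buckets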

Please review and share your inputs.

  cluster:
    id:     xxxxxxxxxxx
    health: HEALTH_WARN
            crush map has legacy tunables (require firefly, min is hammer)
            1 pools have many more objects per pg than average
            15252 pgs not deep-scrubbed in time
            21399 pgs not scrubbed in time
            clients are using insecure global_id reclaim
            mons are allowing insecure global_id reclaim
            3 monitors have not enabled msgr2
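
The msgr2 and global_id warnings have standard, documented remediations; a quick sketch (the global_id flag must only be disabled once every client is patched for CVE-2021-20288):

ceph mon enable-msgr2
ceph config set mon auth_allow_insecure_global_id_reclaim false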


ceph daemon mon.$(hostname -s) config show |grep -i mon_crush_min_required_version
    "mon_crush_min_required_version": "firefly",

ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 22,
    "profile": "firefly",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "firefly",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}
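
If I am reading the bucket-algorithm bitmask right, this output confirms the tunables never left firefly (the decoded bit values below are my annotation, not part of the original output):

ceph osd crush show-tunables | grep -E 'profile|allowed_bucket_algs'
# 22 = 2 (uniform) + 4 (list) + 16 (straw); straw2 (32) is not yet allowed
# after "ceph osd crush tunables hammer" one would expect 54 and "profile": "hammer"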

ceph config dump
WHO   MASK LEVEL    OPTION                         VALUE   RO
  mon      advanced mon_crush_min_required_version firefly *
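
For completeness, the centralized config database can also be queried directly and compared with the runtime value the mon reported above (standard Nautilus command):

ceph config get mon mon_crush_min_required_version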

ceph versions
{
    "mon": {
        "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)": 549,
        "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)": 226
    },
    "mds": {},
    "rgw": {
        "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)": 2
    },
    "overall": {
        "ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)": 549,
        "ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)": 234
    }
}
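
Note that 549 of the 800 OSDs are still on 14.2.21; finishing the point release first would rule out version skew as a factor. A minimal sketch, assuming systemd-managed OSDs restarted host by host (the OSD ids are placeholders):

ceph osd ok-to-stop 1 2 3             # substitute the host's actual OSD ids
systemctl restart ceph-osd.target     # then restart that host's OSDs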

ceph -s
  cluster:
    id:    xxxxxxxxxxxxxxxxxx
    health: HEALTH_WARN
            crush map has legacy tunables (require firefly, min is hammer)
            1 pools have many more objects per pg than average
            13811 pgs not deep-scrubbed in time
            19994 pgs not scrubbed in time
            clients are using insecure global_id reclaim
            mons are allowing insecure global_id reclaim
            3 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum pistoremon-ho-c01,pistoremon-ho-c02,pistoremon-ho-c03 (age 24s)
    mgr: pistoremon-ho-c02(active, since 2m), standbys: pistoremon-ho-c01, pistoremon-ho-c03
    osd: 800 osds: 775 up (since 105m), 775 in
    rgw: 2 daemons active (pistorergw-ho-c01, pistorergw-ho-c02)

  task status:

  data:
    pools:   28 pools, 27336 pgs
    objects: 107.19M objects, 428 TiB
    usage:   1.3 PiB used, 1.5 PiB / 2.8 PiB avail
    pgs:     27177 active+clean
             142   active+clean+scrubbing+deep
             17    active+clean+scrubbing

  io:
    client:   220 MiB/s rd, 1.9 GiB/s wr, 7.07k op/s rd, 25.42k op/s wr

--
Regards,
Suresh

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
