Re: Cannot list RBDs in any pool / cannot mount any RBD

Hello,

The client is using this version:
root@ld3955:~# ceph versions
{
    "mon": {
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 161,
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 277
    },
    "mds": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 2
    },
    "overall": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 163,
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 283
    }
}

The server is using this version:
root@ld5505:~# ceph versions
{
    "mon": {
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 161,
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 277
    },
    "mds": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 2
    },
    "overall": {
        "ceph version 14.2.4 (65249672c6e6d843510e7e01f8a4b976dcac3db1) nautilus (stable)": 163,
        "ceph version 14.2.4-1-gd592e56 (d592e56e74d94c6a05b9240fcb0031868acefbab) nautilus (stable)": 283
    }
}

There's no issue with time sync, though.
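
In case it helps, this is how I would double-check both points directly on
the client itself (a Debian-based client is an assumption here; use rpm on
RPM-based systems):

# list installed Ceph-related packages and their versions
dpkg -l | grep -E 'ceph|librbd|librados'
# exact version of the ceph-common package
dpkg -s ceph-common | grep '^Version'
# confirm the system clock is synchronized
timedatectl status | grep -i synchronized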


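One more observation (just a guess on my side, not verified): the OSDs run
with cephx_sign_messages = false, yet the client-side errors below complain
about missing message signatures. Comparing the effective signing settings
on both ends should show whether they disagree:

# effective signing settings on an OSD (mon/mds analogous)
ceph config show osd.0 | grep cephx
# settings the client picks up from its local ceph.conf
grep -i cephx /etc/ceph/ceph.conf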

Am 15.11.2019 um 13:22 schrieb Wido den Hollander:
>
> On 11/15/19 11:38 AM, Thomas Schneider wrote:
>> Hi,
>>
>> when I execute this command
>> rbd ls -l <pool-name>
>> to list all RBDs I get spamming errors:
>>
> Those errors are weird. Can you share the Ceph cluster version and the
> client versions?
>
> $ ceph versions
>
> And then also use rpm/dpkg to check which version of Ceph runs on the
> client.
>
> Are you also sure the time is in sync on the client?
>
> Wido
>
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Sender did not set
>> CEPH_MSG_FOOTER_SIGNED.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Message signature
>> does not match contents.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Signature on message:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1    sig: 0
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Locally calculated
>> signature:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1   
>> sig_check:6229148783016323662
>> 2019-11-15 11:29:19.428 7fd852678700  0 Signature failed.
>> 2019-11-15 11:29:19.428 7fd852678700  0 --1- 10.97.206.91:0/2841811017
>>>> v1:10.97.206.97:6884/265976 conn(0x7fd834090770 0x7fd83408c190 :-1
>> s=READ_FOOTER_AND_DISPATCH pgs=42068 cs=1 l=1).handle_message_footer
>> Signature check failed
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Sender did not set
>> CEPH_MSG_FOOTER_SIGNED.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Message signature
>> does not match contents.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Signature on message:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1    sig: 0
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Locally calculated
>> signature:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1   
>> sig_check:4639667068846516939
>> 2019-11-15 11:29:19.428 7fd852678700  0 Signature failed.
>> 2019-11-15 11:29:19.428 7fd852678700  0 --1- 10.97.206.91:0/2841811017
>>>> v1:10.97.206.97:6884/265976 conn(0x7fd83408c990 0x7fd83408b160 :-1
>> s=READ_FOOTER_AND_DISPATCH pgs=42069 cs=1 l=1).handle_message_footer
>> Signature check failed
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Sender did not set
>> CEPH_MSG_FOOTER_SIGNED.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1 Message signature
>> does not match contents.
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Signature on message:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1    sig: 0
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1Locally calculated
>> signature:
>> 2019-11-15 11:29:19.428 7fd852678700  0 SIGN: MSG 1   
>> sig_check:12754808375040063976
>> 2019-11-15 11:29:19.428 7fd852678700  0 Signature failed.
>> 2019-11-15 11:29:19.428 7fd852678700  0 --1- 10.97.206.91:0/2841811017
>>>> v1:10.97.206.97:6884/265976 conn(0x7fd834090770 0x7fd83408c190 :-1
>> s=READ_FOOTER_AND_DISPATCH pgs=42070 cs=1 l=1).handle_message_footer
>> Signature check failed
>>
>> I already stopped and started all MON services, without success.
>>
>> This is the output of the config dump:
>> root@ld3955:~# ceph config-key dump | grep config/
>>     "config/mgr/mgr/balancer/active": "true",
>>     "config/mgr/mgr/balancer/mode": "upmap",
>>     "config/mgr/mgr/balancer/pool_ids": "11,59,60,61",
>>     "config/mgr/mgr/balancer/upmap_max_iterations": "2",
>>     "config/mgr/mgr/dashboard/url_prefix": "dashboard",
>>     "config/mgr/mgr/devicehealth/enable_monitoring": "false",
>>     "config/mgr/target_max_misplaced_ratio": "0.010000",
>>
>> And this is the config of an OSD showing that cephx authentication is
>> activated:
>> root@ld3955:~# ceph config show osd.0
>> NAME                             VALUE                                    SOURCE   OVERRIDES IGNORES
>> auth_client_required             cephx                                    file
>> auth_cluster_required            cephx                                    file
>> auth_service_required            cephx                                    file
>> bluestore_block_db_size          10737418240                              file
>> cephx_cluster_require_signatures false                                    file
>> cephx_require_signatures         false                                    file
>> cephx_sign_messages              false                                    file
>> cluster_network                  192.168.1.0/27                           file
>> daemonize                        false                                    override
>> debug_ms                         0/0                                      file
>> keyring                          $osd_data/keyring                        default
>> leveldb_log                                                               default
>> mon_allow_pool_delete            true                                     file
>> mon_host                         10.97.206.93 10.97.206.94 10.97.206.95  file
>> mon_osd_full_ratio               0.850000                                 file
>> mon_osd_nearfull_ratio           0.750000                                 file
>> osd_crush_update_on_start        false                                    file
>> osd_deep_scrub_interval          1209600.000000                           file
>> osd_journal_size                 1024                                     file
>> osd_max_backfills                2                                        file
>> osd_op_queue                     wpq                                      file
>> osd_op_queue_cut_off             high                                     file
>> osd_pool_default_min_size        2                                        file
>> osd_pool_default_size            3                                        file
>> osd_scrub_begin_hour             21                                       file
>> osd_scrub_end_hour               8                                        file
>> osd_scrub_sleep                  0.100000                                 file
>> public_network                   10.97.206.0/24                           file
>> rbd_default_features             61                                       default
>> setgroup                         ceph                                     cmdline
>> setuser                          ceph                                     cmdline
>>
>> How can I fix this error?
>>
>> THX
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



