Re: "protocol feature mismatch" after upgrading to Hammer

Did you enable the straw2 stuff? CRUSH_V4 shouldn't be required by the
cluster unless you made layout changes that require it.

If you did, the clients have to be upgraded to understand it. You
could disable all the v4 features; that should let them connect again.
-Greg
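
A minimal sketch of what "disable all the v4 features" can look like in
practice, assuming the only CRUSH_V4 dependency is straw2 buckets in the
CRUSH map (file names below are illustrative):

    # Export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # Revert any straw2 buckets back to straw
    sed -i 's/alg straw2/alg straw/' crushmap.txt

    # Recompile and inject the edited map so pre-CRUSH_V4 clients can connect
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Note that changing a bucket's algorithm changes placement, so expect some
data movement after injecting the edited map.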

On Thu, Apr 9, 2015 at 7:07 AM, Kyle Hutson <kylehutson@xxxxxxx> wrote:
> I just figured out this particular problem myself ('ceph -w' was still
> running from before the upgrade; Ctrl-C and restarting it solved that
> issue), but I'm still having a similar problem on the ceph client:
>
> libceph: mon19 10.5.38.20:6789 feature set mismatch, my 2b84a042aca <
> server's 102b84a042aca, missing 1000000000000
>
> It appears that even the latest kernel doesn't have support for
> CEPH_FEATURE_CRUSH_V4.
>
> How do I make my ceph cluster backward-compatible with the old cephfs
> client?
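
For reference, the missing bit in that log line can be decoded by hand; a
small sketch (the bit-to-name mapping comes from Ceph's ceph_features.h):

    # The client/server masks differ by 0x1000000000000, which is bit 48:
    printf '0x%x\n' $((1 << 48))    # -> 0x1000000000000
    # In ceph_features.h, bit 48 is CEPH_FEATURE_CRUSH_V4 (straw2 buckets),
    # which older kernel cephfs clients do not advertise.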
>
> On Thu, Apr 9, 2015 at 8:58 AM, Kyle Hutson <kylehutson@xxxxxxx> wrote:
>>
>> I upgraded from Giant to Hammer yesterday, and now 'ceph -w' is constantly
>> repeating this message:
>>
>> 2015-04-09 08:50:26.318042 7f95dbf86700  0 -- 10.5.38.1:0/2037478 >>
>> 10.5.38.1:6789/0 pipe(0x7f95e00256e0 sd=3 :39489 s=1 pgs=0 cs=0 l=1
>> c=0x7f95e0023670).connect protocol feature mismatch, my 3fffffffffff < peer
>> 13fffffffffff missing 1000000000000
>>
>> It isn't always the same IP for the destination - here's another:
>> 2015-04-09 08:50:20.322059 7f95dc087700  0 -- 10.5.38.1:0/2037478 >>
>> 10.5.38.8:6789/0 pipe(0x7f95e00262f0 sd=3 :54047 s=1 pgs=0 cs=0 l=1
>> c=0x7f95e002b480).connect protocol feature mismatch, my 3fffffffffff < peer
>> 13fffffffffff missing 1000000000000
>>
>> Some details about our install:
>> We have 24 hosts with 18 OSDs each. 16 OSDs per host are spinning disks in
>> an erasure-coded pool (k=8, m=4), and the other 2 OSDs per host are SSD
>> partitions used for a cache tier in front of the EC pool. All 24 hosts are
>> monitors, and 4 hosts are MDSes. We are running CephFS, and we see these
>> messages when a client tries to write data over CephFS.
>>
>> Any ideas?
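
One way to confirm that the feature being demanded really is straw2/CRUSH_V4
is to diff the two masks from the log above and check the CRUSH map for
straw2 buckets; a rough sketch (the /tmp path is illustrative):

    # Bits the peer requires that this client lacks (masks taken from the log)
    printf '0x%x\n' $(( 0x13fffffffffff & ~0x3fffffffffff ))   # -> 0x1000000000000

    # Check whether the cluster's CRUSH map actually contains straw2 buckets
    ceph osd getcrushmap -o /tmp/cm.bin
    crushtool -d /tmp/cm.bin | grep -c 'alg straw2'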
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



