Re: new ceph-0.60 always crash mds

On Fri, Apr 12, 2013 at 5:19 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
> 2013/4/13 Gregory Farnum <greg@xxxxxxxxxxx>:
>> Have you made any changes from the default settings? That assert
>
> Do you mean one of these settings?
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> cephx require signatures = true
> cephx cluster require signatures = true
> cephx service require signatures = true
> cephx sign messages = true
> keyring = /etc/ceph/keyring

These all look fine.

>
> Yes, these were my testing configs. The command that created the cluster was:
> mkcephfs -v -a -c /etc/ceph/ceph.conf --mkfs
>
> I thought it was a permissions problem, so I copied all the keys from
> 'ceph auth list' into /etc/ceph/keyring, but the MDS still crashes.
>
> I'm wondering where I should check the MDS's permissions. Is it
> just 'ceph auth list'?

Yeah. Mine has pretty broad permissions:
mds.a
        key: AQC0omhRaPzNLBAAz5J/JdZ6LrJo/DNG0ns8xg==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *

but I wasn't sure if yours were more constrained on pool access or something.
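
For reference, caps like the ones above can be inspected and adjusted from an admin node. This is a rough sketch, not an exact recipe for your cluster: the entity name mds.a is taken from my listing above, and the restart invocation assumes the sysvinit ceph script, so adjust both to your setup:

```shell
# Show the current key and caps for the MDS entity (here: mds.a).
ceph auth get mds.a

# Grant the broad caps shown above: 'allow' on mds, and full access
# to the mon and osd so pool permissions can't get in the way.
ceph auth caps mds.a mds 'allow' mon 'allow *' osd 'allow *'

# Restart the MDS afterwards so it reconnects with the new caps
# (path/name of the init integration varies by distro).
service ceph restart mds.a
```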
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
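
To check the other likely cause described below (the data the MDS needs doesn't exist), you can look at the metadata pool directly. This is a sketch under assumptions: the pool name "metadata" and table object names like mds0_inotable are what a default mkcephfs-created filesystem of that era would use, so yours may differ:

```shell
# List objects in the metadata pool; on a working filesystem you
# should see the MDS table objects (e.g. mds0_inotable, mds0_sessionmap).
rados -p metadata ls

# Show the MDS map: which daemons are up, and which rank they hold.
ceph mds dump
```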


>
>> indicates the MDS couldn't load some data that it needed; the two most
>> likely things are that the data doesn't exist or that the MDS isn't
>> allowed to access it due to some permissions problem on the pool in
>> question.
>
>>
>> On Thu, Apr 11, 2013 at 9:01 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
>>> My newly created ceph cluster on ceph-0.60 always crashes the MDS; the last messages:
>>>
>>>      0> 2013-04-12 11:48:53.034644 7f691cf64700 -1 mds/MDSTable.cc: In
>>> function 'void MDSTable::load_2(int, ceph::bufferlist&, Context*)'
>>> thread 7f691cf64700 time 2013-04-12 11:48:53.033488
>>> mds/MDSTable.cc: 150: FAILED assert(0)
>>>
>>> I thought it was a cephx problem, but after disabling cephx and
>>> restarting the whole cluster, it still crashes the same way.
>>>
>>> auth cluster required = none
>>> auth service required = none
>>> auth client required = none
>>>
>>> Previous setting:
>>> ;auth cluster required = cephx
>>> ;auth service required = cephx
>>> ;auth client required = cephx
>>> ;cephx require signatures = true
>>> ;cephx cluster require signatures = true
>>> ;cephx service require signatures = true
>>> ;cephx sign messages = true
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html