Re: ceph authenticate problem

;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the start/stop script will
; verify that it matches the hostname (or else ignore it).  If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; global
[global]
	; enable secure authentication
	auth supported = cephx
	keyring = /usr/local/etc/ceph/keyring.bin

	; allow ourselves to open a lot of files
	max open files = 131072

	; set up logging
	log file = /var/log/ceph/$name.log

	; set up pid files
	pid file = /var/run/ceph/$name.pid

; monitors
;  You need at least one.  You need at least three if you want to
;  tolerate any node failures.  Always create an odd number.
[mon]
	mon data = /data/mon$id

	; logging, for debugging monitor crashes, in order of
	; their likelihood of being helpful :)
	;debug ms = 1
	;debug mon = 20
	;debug paxos = 20
	;debug auth = 20

[mon.0]
	host = ceph_mon0
	mon addr = 192.168.0.211:6789

; mds
;  You need at least one.  Define two to get a standby.
[mds]
	; where the mds keeps its secret encryption keys
	keyring = /usr/local/etc/ceph/keyring.$name

	; mds logging to debug issues.
	;debug ms = 1
	;debug mds = 20

[mds.alpha]
	host = ceph_mds0

; osd
;  You need at least one.  Two if you want data to be replicated.
;  Define as many as you like.
[osd]
	; This is where the btrfs volume will be mounted.
	osd data = /data/osd$id
	keyring = /usr/local/etc/ceph/keyring.$name

	; Ideally, make this a separate disk or partition.  A few
	; hundred MB should be enough; more if you have fast or many
	; disks.  You can use a file under the osd data dir if need be
	; (e.g. /data/osd$id/journal), but it will be slower than a
	; separate disk or partition.

	; This is an example of a file-based journal.
	osd journal = /data/osd$id/journal
	osd journal size = 1000 ; journal size, in megabytes

	; osd logging to debug osd issues, in order of likelihood of being
	; helpful
	;debug ms = 1
	;debug osd = 20
	;debug filestore = 20
	;debug journal = 20

[osd.0]
	host = ceph_osd0

	; if 'btrfs devs' is not specified, you're responsible for
	; setting up the 'osd data' dir.  if it is not btrfs, things
	; will behave up until you try to recover from a crash (which
	; is usually fine for basic testing).
	btrfs devs = /dev/sda7


[osd.1]
	host = ceph_osd1
	btrfs devs = /dev/sda7
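
Since 'auth supported = cephx' is enabled, every daemon needs the same keyring and conf. A minimal sketch of distributing them (the hostnames come from the conf above; the root@ login and paths are assumptions) — it prints one scp command per node as a dry run, so you can review and then pipe the output to sh:

```shell
#!/bin/sh
# Print one scp command per node so the same ceph.conf and keyring.bin
# reach every host (dry run; pipe to sh to actually copy).  Hostnames
# and the root@ login are assumptions taken from the conf above.
push_conf_cmds() {
    for host in "$@"; do
        echo "scp /usr/local/etc/ceph/ceph.conf /usr/local/etc/ceph/keyring.bin root@$host:/usr/local/etc/ceph/"
    done
}

push_conf_cmds ceph_mon0 ceph_mds0 ceph_osd0 ceph_osd1
```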




Steps:
/etc/init.d/ceph -a stop
mkcephfs -c /usr/local/etc/ceph/ceph.conf --allhosts --mkbtrfs -k /usr/local/etc/ceph/keyring.bin
/etc/init.d/ceph -a start

Are the start steps above correct?
Thanks!
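
For what it's worth, cephx errors like "could not verify service_ticket reply" are often a keyring mismatch between nodes after re-running mkcephfs. A small sketch that compares two keyring files by checksum (fetching the remote copy first, e.g. with scp from ceph_osd0, is left to you; paths and hostnames are assumptions):

```shell
#!/bin/sh
# Compare two keyring files by checksum.  A mismatch between nodes is
# one common cause of cephx "could not verify service_ticket reply"
# errors.  Paths below are assumptions from the conf in this thread.
same_key() {
    [ "$(md5sum "$1" | awk '{print $1}')" = "$(md5sum "$2" | awk '{print $1}')" ]
}

# Hypothetical usage once a remote copy has been fetched, e.g. via
#   scp root@ceph_osd0:/usr/local/etc/ceph/keyring.bin /tmp/osd0.keyring
# same_key /usr/local/etc/ceph/keyring.bin /tmp/osd0.keyring || echo "keyring mismatch"
```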

On 24 May 2011, at 12:51 PM, huang jun <hjwsm1989@xxxxxxxxx> wrote:
>  hi
>  there are two [global] sections in your ceph.conf
>  i do not know why you didn't have admin_keyring.bin in your /data/mon0;
> maybe you should restart the whole cluster after making sure the
> configuration file is right
>
> On 23 May 2011, at 3:48 PM, biyan chen <riby.chen@xxxxxxxxx> wrote:
>> ;
>> ; Sample ceph ceph.conf file.
>> ;
>> ; This file defines cluster membership, the various locations
>> ; that Ceph stores data, and any other runtime options.
>>
>> ; If a 'host' is defined for a daemon, the start/stop script will
>> ; verify that it matches the hostname (or else ignore it).  If it is
>> ; not defined, it is assumed that the daemon is intended to start on
>> ; the current host (e.g., in a setup with a startup.conf on each
>> ; node).
>>
>> ; global
>> [global]
>>        ; enable secure authentication
>>        auth supported = cephx
>>        keyring = /use/local/etc/ceph/keyring.bin
>>
>>
>>        ; allow ourselves to open a lot of files
>>        max open files = 131072
>>
>>        ; set up logging
>> ;
>> ; Sample ceph ceph.conf file.
>> ;
>> ; This file defines cluster membership, the various locations
>> ; that Ceph stores data, and any other runtime options.
>>
>> ; If a 'host' is defined for a daemon, the start/stop script will
>> ; verify that it matches the hostname (or else ignore it).  If it is
>> ; not defined, it is assumed that the daemon is intended to start on
>> ; the current host (e.g., in a setup with a startup.conf on each
>> ; node).
>>
>> ; global
>> [global]
>>        ; enable secure authentication
>>        auth supported = cephx
>>        keyring = /use/local/etc/ceph/keyring.bin
>>
>>
>>        ; allow ourselves to open a lot of files
>>        max open files = 131072
>>
>>        ; set up logging
>>        log file = /var/log/ceph/$name.log
>>
>>        ; set up pid files
>>        pid file = /var/run/ceph/$name.pid
>>
>> ; monitors
>> ;  You need at least one.  You need at least three if you want to
>> ;  tolerate any node failures.  Always create an odd number.
>> [mon]
>>        mon data = /data/mon$id
>>
>>        ; logging, for debugging monitor crashes, in order of
>>        ; their likelihood of being helpful :)
>>        ;debug ms = 1
>>        ;debug mon = 20
>>        ;debug paxos = 20
>>        ;debug auth = 20
>>
>> [mon.0]
>>        host = ceph_mon0
>>        mon addr = 192.168.0.211:6789
>>
>> ; mds
>> ;  You need at least one.  Define two to get a standby.
>> [mds]
>>        ; where the mds keeps it's secret encryption keys
>>        keyring = /usr/local/etc/ceph/keyring.$name
>>
>>        ; mds logging to debug issues.
>>        ;debug ms = 1
>>        ;debug mds = 20
>>
>> [mds.alpha]
>>        host = ceph_mds0
>>
>> ; osd
>> ;  You need at least one.  Two if you want data to be replicated.
>> ;  Define as many as you like.
>> [osd]
>>        ; This is where the btrfs volume will be mounted.
>>        osd data = /data/osd$id
>>        keyring = /usr/local/etc/ceph/keyring.$name
>>
>>
>>        ; Ideally, make this a separate disk or partition.  A few
>>        ; hundred MB should be enough; more if you have fast or many
>>        ; disks.  You can use a file under the osd data dir if need be
>>        ; (e.g. /data/osd$id/journal), but it will be slower than a
>>        ; separate disk or partition.
>>
>>        ; This is an example of a file-based journal.
>>        osd journal = /data/osd$id/journal
>>        osd journal size = 1000 ; journal size, in megabytes
>>
>>        ; osd logging to debug osd issues, in order of likelihood of being
>>        ; helpful
>>        ;debug ms = 1
>>        ;debug osd = 20
>>        ;debug filestore = 20
>>        ;debug journal = 20
>>
>> [osd.0]
>>        host = ceph_osd0
>>
>>        ; if 'btrfs devs' is not specified, you're responsible for
>>        ; setting up the 'osd data' dir.  if it is not btrfs, things
>>        ; will behave up until you try to recover from a crash (which
>>        ; usually fine for basic testing).
>>        btrfs devs = /dev/sda7
>>
>>
>> [osd.1]
>>        host = ceph_osd1
>>        btrfs devs = /dev/sda7
>>
>>
>>
>> log:
>> [root@ceph_mon0 ~]# ceph mon stat -c /usr/local/etc/ceph/ceph.conf
>> 2011-05-23 11:44:47.078662 7fb266bf8720 unable to authenticate as client.admin
>> 2011-05-23 11:44:47.079037 7fb266bf8720 ceph_tool_common_init failed.
>>
>> [root@ceph_mds0 ~]# tail /var/log/ceph/mds.alpha.log -f
>> 2011-05-23 23:44:53.464262 7fb0ce956710 -- 192.168.0.207:6800/29302 >>
>> 192.168.0.211:6789/0 pipe(0x26c1a00 sd=5 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:44:56.464402 7fb0ce855710 -- 192.168.0.207:6800/29302 >>
>> 192.168.0.211:6789/0 pipe(0x26d3280 sd=5 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:44:59.464492 7fb0ce956710 -- 192.168.0.207:6800/29302 >>
>> 192.168.0.211:6789/0 pipe(0x26c3c80 sd=5 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:45:02.464787 7fb0ce855710 -- 192.168.0.207:6800/29302 >>
>> 192.168.0.211:6789/0 pipe(0x26cec80 sd=6 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:45:05.464941 7fb0cfd58710 mds-1.0 ms_handle_connect on
>> 192.168.0.211:6789/0
>> 2011-05-23 23:45:05.465405 7fb0cfd58710 cannot convert AES key for NSS: -8023
>> 2011-05-23 23:45:05.465946 7fb0cfd58710 cannot convert AES key for NSS: -8023
>> 2011-05-23 23:45:05.465969 7fb0cfd58710 error from decrypt -22
>> 2011-05-23 23:45:05.465998 7fb0cfd58710 cephx:
>> verify_service_ticket_reply failed decode_decrypt with secret
>> AQB4R9pNEJ/lFRAAdsRa+EFMA00acC28x9Fj7A==
>> 2011-05-23 23:45:05.466010 7fb0cfd58710 cephx client: could not verify
>> service_ticket reply
>>
>> [root@ceph_osd0 ~]# tail /var/log/ceph/osd.0.log -f
>> 2011-05-23 23:43:45.583183 7f5b2f383710 -- 192.168.0.208:6800/30745 >>
>> 192.168.0.211:6789/0 pipe(0xf79280 sd=12 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:43:48.583357 7f5b32689710 -- 192.168.0.208:6800/30745 >>
>> 192.168.0.211:6789/0 pipe(0xf79000 sd=12 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:43:51.583412 7f5b2f383710 -- 192.168.0.208:6800/30745 >>
>> 192.168.0.211:6789/0 pipe(0xf79280 sd=13 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:43:54.583548 7f5b32689710 -- 192.168.0.208:6800/30745 >>
>> 192.168.0.211:6789/0 pipe(0xf79000 sd=12 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:43:57.583842 7f5b2f383710 -- 192.168.0.208:6800/30745 >>
>> 192.168.0.211:6789/0 pipe(0xf79280 sd=12 pgs=0 cs=0 l=0).fault first
>> fault
>> 2011-05-23 23:44:00.584629 7f5b34f8e710 cannot convert AES key for NSS: -8023
>> 2011-05-23 23:44:00.585262 7f5b34f8e710 cannot convert AES key for NSS: -8023
>> 2011-05-23 23:44:00.585290 7f5b34f8e710 error from decrypt -22
>> 2011-05-23 23:44:00.585312 7f5b34f8e710 cephx:
>> verify_service_ticket_reply failed decode_decrypt with secret
>> AQAqR9pNUNJ2IBAAp16KszdsZHCwEf5IOcoSdw==
>> 2011-05-23 23:44:00.585322 7f5b34f8e710 cephx client: could not verify
>> service_ticket reply
>>
>> Has anyone else seen problems like this? Any help would be appreciated.
>>
>> thank you!
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>



-- 
name:Riby
mobile:+86 15280267642
company: 百大龙一

