radosgw fails to start, leaves no clues why

Hi all-

 

I'm trying to set up object storage on CentOS.  I've done this successfully on Ubuntu, but I'm having some trouble on CentOS.  I think I have everything configured, but when I start the radosgw service it reports that it is starting, yet its status afterwards shows it is not running, with no helpful output as to why on the console or in the radosgw log.  I once saw a similar problem on Ubuntu when the hostname in ceph.conf was incorrect, but that doesn't seem to be the issue here.  I'm not sure where to go next.  Any suggestions as to what the problem could be?  Thanks!
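In case it helps anyone reproducing this: one way to get more output than the init script gives is to run the daemon in the foreground with verbose logging (a sketch, assuming the standard radosgw CLI flags and that the client name is `client.radosgw.gateway`, matching the keyring further down):

```shell
# Run radosgw in the foreground so startup errors go to the terminal
# instead of being swallowed by the init script.
# -d             : stay in the foreground and log to stderr
# -n NAME        : client name whose ceph.conf section and keyring to use
# --debug-rgw/ms : raise gateway and messenger log verbosity
sudo radosgw -d -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1
```

If that prints nothing useful either, the init script may simply not be finding an instance to start in ceph.conf.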

 

[ceph@joceph08 ceph]$ sudo service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

[ceph@joceph08 ceph]$ cat ceph.conf
[joceph08.radosgw.gateway]
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_dns_name = joceph08
host = joceph08
log_file = /var/log/ceph/radosgw.log
rgw_socket_path = /tmp/radosgw.sock

[global]
filestore_xattr_use_omap = true
mon_host = 10.23.37.142,10.23.37.145,10.23.37.161
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03
auth_supported = cephx
fsid = 721ea513-e84c-48df-9c8f-f1d9e602b810

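One thing that may be worth checking (an assumption on my part, not confirmed by the output above): the keyring pasted further down defines `client.radosgw.gateway`, but the conf section here is named `[joceph08.radosgw.gateway]`.  radosgw normally reads its options from the section matching its client name, and the CentOS init script only starts instances it can find under such a section, so a section along these lines may be what it is looking for:

```ini
; hypothetical sketch -- section renamed to match the keyring's client name
[client.radosgw.gateway]
host = joceph08
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_dns_name = joceph08
rgw_socket_path = /tmp/radosgw.sock
log_file = /var/log/ceph/radosgw.log
```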
[ceph@joceph08 ceph]$ sudo service ceph-radosgw start
Starting radosgw instance(s)...

[ceph@joceph08 ceph]$ sudo service ceph-radosgw status
/usr/bin/radosgw is not running.

[ceph@joceph08 ceph]$ sudo cat /var/log/ceph/radosgw.log
[ceph@joceph08 ceph]$

[ceph@joceph08 ceph]$ sudo cat /etc/ceph/keyring.radosgw.gateway
[client.radosgw.gateway]
        key = AQDbUnFSIGT2BxAA5rz9I1HHIG/LJx+XCYot1w==
        caps mon = "allow rw"
        caps osd = "allow rwx"

[ceph@joceph08 ceph]$ ceph status
  cluster 721ea513-e84c-48df-9c8f-f1d9e602b810
   health HEALTH_OK
   monmap e1: 3 mons at {joceph01=10.23.37.142:6789/0,joceph02=10.23.37.145:6789/0,joceph03=10.23.37.161:6789/0}, election epoch 8, quorum 0,1,2 joceph01,joceph02,joceph03
   osdmap e119: 16 osds: 16 up, 16 in
   pgmap v1383: 3200 pgs: 3200 active+clean; 219 GB data, 411 GB used, 10760 GB / 11172 GB avail
   mdsmap e1: 0/0/1 up

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
