Re: Ceph / RadosGW deployment questions

On Tue, Sep 24, 2013 at 12:46 AM, Guang <yguang11@xxxxxxxxx> wrote:
> Hi ceph-users,
> I deployed a Ceph cluster (including RadosGW) using ceph-deploy on
> RHEL 6.4. During the deployment, a couple of questions came up that I need
> your help with.
>
> 1. I followed the steps at http://ceph.com/docs/master/install/rpm/ to deploy
> the RadosGW node; however, after the deployment, all requests failed with a
> 500 response. With some hints from
> http://irclogs.ceph.widodh.nl/index.php?date=2013-01-25, I changed
> FastCgiExternalServer to FastCgiServer within rgw.conf. Is this change valid,
> or did I miss something elsewhere that made it necessary?

In theory you could use either; however, the preferred mode of
installation is to use FastCgiExternalServer and run the radosgw
process manually.
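
For reference, a minimal rgw.conf along those lines might look like the
following (the socket path, server name, and script location are
illustrative and have to match your setup):

    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock

    <VirtualHost *:80>
        ServerName gateway.example.com
        DocumentRoot /var/www
        RewriteEngine On
        RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    </VirtualHost>

FastCgiExternalServer tells mod_fastcgi that the FastCGI process is managed
externally, so apache won't try to spawn it itself.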

>
> 2. It still does not work, and httpd has the following error in its log:
>     [Mon Sep 23 07:34:32 2013] [crit] (98)Address already in use: FastCGI:
> can't create server "/var/www/s3gw.fcgi": bind() failed [/tmp/radosgw.sock]
> which indicates that radosgw was not started properly. So I manually ran
> "radosgw --rgw-socket-path=/tmp/radosgw.sock -c /etc/ceph/ceph.conf -n
> client.radosgw.gateway" to start a radosgw daemon, and then the gateway
> started working as expected.
> Did I miss anything in this part?

That's one way to run the radosgw process. You still want to change
the apache conf to use the external server configuration; otherwise
apache will try to relaunch it.
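
For example, assuming the same illustrative socket path as above, you would
point the gateway at the socket in ceph.conf and then start it by hand:

    # /etc/ceph/ceph.conf on the gateway host
    [client.radosgw.gateway]
        rgw socket path = /tmp/radosgw.sock

    # start the daemon manually (or via the init script)
    radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

With 'rgw socket path' set in ceph.conf, the --rgw-socket-path command line
flag is no longer needed.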

>
> 3. When I tried to run the ceph admin-daemon command on the RadosGW host,
> it failed because the host does not have the corresponding asok file.
> However, I am able to run the command on the monitor host, and found that
> the RadosGW's information can be retrieved there.
>
> @monitor (monitor and gateway are deployed on different hosts).
> [xxx@startbart ceph]$ sudo ceph --admin-daemon
> /var/run/ceph/ceph-mon.startbart.asok config show | grep rgw
>   "rgw": "1\/5",
>   "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-startbart",
>   "rgw_enable_apis": "s3, swift, swift_auth, admin",
>   "rgw_cache_enabled": "true",
>   "rgw_cache_lru_size": "10000",
>   "rgw_socket_path": "",
>   "rgw_host": "",
>   "rgw_port": "",
>   "rgw_dns_name": "",
>   "rgw_script_uri": "",
>   "rgw_request_uri": "",
>   "rgw_swift_url": "",
>   "rgw_swift_url_prefix": "swift",
>   "rgw_swift_auth_url": "",
>   "rgw_swift_auth_entry": "auth",
>   "rgw_keystone_url": "",
>   "rgw_keystone_admin_token": "",
>   "rgw_keystone_accepted_roles": "Member, admin",
>   "rgw_keystone_token_cache_size": "10000",
>   "rgw_keystone_revocation_interval": "900",
>   "rgw_admin_entry": "admin",
>   "rgw_enforce_swift_acls": "true",
>   "rgw_swift_token_expiration": "86400",
>   "rgw_print_continue": "true",
>   "rgw_remote_addr_param": "REMOTE_ADDR",
>   "rgw_op_thread_timeout": "600",
>   "rgw_op_thread_suicide_timeout": "0",
>   "rgw_thread_pool_size": "100",
> Is this expected?

The ceph configuration is monolithic: what you see here is the mon's
configuration, which includes some rgw defaults, but it doesn't reflect
the actual rgw configuration. There's an open issue for the gateway not
creating the admin socket by default; try adding an 'admin socket' config
line to your gateway's ceph.conf.
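
Something like the following should do it; the .asok path here is just an
example:

    [client.radosgw.gateway]
        admin socket = /var/run/ceph/radosgw.asok

After restarting radosgw, you should be able to query the gateway directly:

    ceph --admin-daemon /var/run/ceph/radosgw.asok config show | grep rgw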

>
> 4. cephx authentication. After reading through the cephx introduction, I got
> the feeling that cephx is for client-to-cluster authentication, so each
> librados user will need to create a new key. However, this page
> http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx
> got me confused: why should we create keys for the mon and osd, and how does
> that fit into the authentication diagram? BTW, I found the keyrings under
> /var/lib/ceph/{role}/ for each role; are they used when talking to the
> other roles?
>

cephx is a Kerberos-like authentication system, and each entity needs to
have a key. In a distributed system like Ceph, there is no single 'server'
(as in client-server); there can be many thousands of such servers, and we
want to authenticate each and every one of them. That being said, when a
client gets a service ticket, it gets it for all services of the same type,
so it doesn't need to acquire a new ticket for each osd it connects to.
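
For example, a key for a new gateway entity would typically be created with
something along these lines (the entity name and caps are illustrative):

    ceph auth get-or-create client.radosgw.gateway \
        osd 'allow rwx' mon 'allow rw' \
        -o /etc/ceph/keyring.radosgw.gateway

The daemon keyrings you found under /var/lib/ceph/{role}/ serve the same
purpose for the mons and osds themselves: each one holds the key that entity
presents when authenticating to the rest of the cluster.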


Yehuda





