On Thu, Jan 23, 2014 at 8:36 PM, David Francheski (dfranche) <dfranche@xxxxxxxxx> wrote:
> Thanks Yehuda,
>
> I've attached both the apache2 access/error logs, as well as the radosgw log file.
> It doesn't look like /var/www/s3gw.fcgi is even being called.
> I put a "touch /tmp/radosgw-started-flag" command in /var/www/s3gw.fcgi for debug purposes;
> I don't see the /tmp/radosgw-started-flag file after the 405 error is returned.
>
> I also placed "rgw debug = 20" into /etc/ceph.conf
> (your suggestion was "debug rgw = 20" which didn't look quite right).

While it doesn't look right, it is the actual configurable.

It doesn't look like your apache setup is correct, there's a good chance that you
have another site configured (maybe the default apache one) and everything is just
getting sent there.

Yehuda

>
> Thanks again for your help
> I really appreciate it!
>
> -David
>
>
> On 1/23/14 9:31 AM, "Yehuda Sadeh" <yehuda@xxxxxxxxxxx> wrote:
>
>> On Thu, Jan 23, 2014 at 8:24 AM, David Francheski (dfranche)
>> <dfranche@xxxxxxxxx> wrote:
>>> Hi,
>>>
>>> I'm using the latest Emperor Ceph release, and trying to bring up the S3
>>> Object Gateway.
>>> I have a Ceph cluster deployed on an Ubuntu 13.10 based distribution.
>>>
>>> When I attempt to create a S3 bucket using the "boto" python module, I get
>>> the following error:
>>>
>>>     Boto.exception.S3ResponseError: S3ResponseError: 405 Method Not Allowed
>>>
>>> (This translates into a PUT request on the apache2 server itself running on
>>> the gateway)
>>>
>>> I'm using the following python script from my client:
>>>
>>> #!/usr/bin/python
>>>
>>> import boto
>>> import boto.s3.connection
>>>
>>> access_key = 'RATATZG7WCGGD9915ODH'
>>> secret_key = 'iTiKndE0oXH239BxuVPWGiuwZim7vrP2snQ01YeN'
>>>
>>> # Connect to S3 Ceph gateway
>>> conn = boto.connect_s3(
>>>     aws_access_key_id = access_key,
>>>     aws_secret_access_key = secret_key,
>>>     host = '35.4.6.150',
>>>     is_secure = False,    # uncomment if you are not using ssl
>>>     calling_format = boto.s3.connection.OrdinaryCallingFormat(),
>>>     )
>>>
>>> # Print connection info
>>> print conn
>>>
>>> # Create a S3 bucket
>>> bucket = conn.create_bucket('s3-ceph-bucket')
>>>
>>>
>>> Also, I'm using the following /etc/apache2/sites-available/rgw.conf file on
>>> the S3 object gateway:
>>>
>>> <IfModule mod_fastcgi.c>
>>>   FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>>> </IfModule>
>>>
>>> <VirtualHost *:80>
>>>   ServerName radosgw.mos.com
>>>   ServerAdmin rgw.mos.com
>>>   DocumentRoot /var/www
>>>   <IfModule mod_fastcgi.c>
>>>     <Directory /var/www>
>>>       Options +ExecCGI
>>>       AllowOverride All
>>>       SetHandler fastcgi-script
>>>       Order allow,deny
>>>       Allow from all
>>>       AuthBasicAuthoritative Off
>>>     </Directory>
>>>   </IfModule>
>>>   <IfModule mod_rewrite.c>
>>>     RewriteEngine On
>>>     RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>>>   </IfModule>
>>>   AllowEncodedSlashes On
>>>   ErrorLog /var/log/apache2/error.log
>>>   CustomLog /var/log/apache2/access.log combined
>>>   ServerSignature Off
>>> </VirtualHost>
>>>
>>
>> That sounds like an issue with subdomain bucket names misconfigured,
>> but it shouldn't be a problem with the ordinary calling format you
>> specified up there. My second guess would be a broken rewrite rule,
>> although at first glance I can't really see anything wrong with the
>> one you have. Can you set 'debug rgw = 20' and provide log for the
>> failing operation?
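
For reference, that setting lives in the radosgw client section of ceph.conf on the
gateway host. Below is only a minimal sketch of what such a section usually looks
like, reusing the /tmp/radosgw.sock socket from the rgw.conf you pasted; the
[client.radosgw.gateway] section name and the keyring/log paths are the conventional
ones from the docs and are assumptions about your setup:

    [client.radosgw.gateway]
        host = gw-host                               # assumption: your gateway's short hostname
        keyring = /etc/ceph/keyring.radosgw.gateway  # assumption: wherever the gateway key lives
        rgw socket path = /tmp/radosgw.sock          # must match the -socket path in rgw.conf
        log file = /var/log/ceph/radosgw.log
        debug rgw = 20                               # note: "debug rgw", not "rgw debug"
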
>>
>> Yehuda
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
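
Regarding the default-site guess above: on an Ubuntu/Debian box, something along
these lines is one quick way to see which virtual host apache is actually handing
requests to, and to make sure only the rgw site answers on port 80. This is just a
sketch; "000-default" is the stock Debian/Ubuntu site name and may differ on your
image, and "rgw" assumes the vhost file is saved as sites-available/rgw.conf as in
your mail:

    # show the parsed virtual host configuration (which vhost is the default for *:80)
    apache2ctl -S

    # disable the stock default site, enable the rgw site, and reload apache
    a2dissite 000-default
    a2ensite rgw
    service apache2 reload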