Re: Radosgw: upgrade Firefly to Hammer, impossible to create bucket

Things you can check:

* Is the RGW node able to resolve bucket-2.ostore.athome.priv? Try: ping bucket-2.ostore.athome.priv
* Is "s3cmd ls" working, or is it throwing errors? See the quick sketch below.
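
A minimal sketch of those two checks (hostnames taken from your setup):

   ping -c 3 bucket-2.ostore.athome.priv   # does the bucket subdomain resolve?
   s3cmd ls                                # does a plain listing work?
   s3cmd -d ls                             # same, with debug output if it fails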

Are you sure the entries below are correct? Generally, host_base and host_bucket should point to the RGW FQDN, in your case the ceph-radosgw1 FQDN.
ostore.athome.priv looks like a different host to me.

host_base->ostore.athome.priv
host_bucket->%(bucket)s.ostore.athome.priv
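
For comparison, if the gateway's FQDN were ceph-radosgw1.athome.priv (an assumption on my part, based on the host entry in your ceph.conf), the entries would look like:

   host_base = ceph-radosgw1.athome.priv
   host_bucket = %(bucket)s.ceph-radosgw1.athome.priv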


****************************************************************
Karan Singh 
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/
****************************************************************

On 13 Apr 2015, at 06:47, Francois Lafont <flafdivers@xxxxxxx> wrote:

Hi,

On a testing cluster, I have a radosgw on Firefly while the other
nodes, OSDs and monitors, are on Hammer. The nodes are installed
with Puppet in personal VMs, so I can reproduce the problem.
Generally, I use s3cmd to check the radosgw. While the radosgw is on
Firefly, I can create buckets without any problem. Then I upgrade
the radosgw (it's an Ubuntu Trusty host):

   sed -i 's/firefly/hammer/g' /etc/apt/sources.list.d/ceph.list
   apt-get update && apt-get dist-upgrade -y
   service apache2 stop
   stop radosgw-all
   start radosgw-all
   service apache2 start
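
As a quick sanity check, the installed version can be confirmed with something like this (Hammer packages should be 0.94.x):

   radosgw --version        # should report 0.94.x after the upgrade
   dpkg -l | grep ceph      # the ceph packages should all be on Hammer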

After that, it is impossible to create a bucket with s3cmd:

--------------------------------------------------
~# s3cmd -d mb s3://bucket-2
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->ostore.athome.priv
DEBUG: ConfigParser: access_key->5R...17_chars...Y
DEBUG: ConfigParser: secret_key->Ij...37_chars...I
DEBUG: ConfigParser: host_bucket->%(bucket)s.ostore.athome.priv
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://bucket-2' using UTF-8
DEBUG: Command: mb
DEBUG: SignHeaders: 'PUT\n\n\n\nx-amz-date:Mon, 13 Apr 2015 03:32:23 +0000\n/bucket-2/'
DEBUG: CreateRequest: resource[uri]=/
DEBUG: SignHeaders: 'PUT\n\n\n\nx-amz-date:Mon, 13 Apr 2015 03:32:23 +0000\n/bucket-2/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(bucket-2): bucket-2.ostore.athome.priv
DEBUG: format_uri(): /
DEBUG: Sending request method_string='PUT', uri='/', headers={'content-length': '0', 'Authorization': 'AWS 5RUS0Z3SBG6IK263PLFY:3V1MdXoCGFrJKrO2LSJaBpNMcK4=', 'x-amz-date': 'Mon, 13 Apr 2015 03:32:23 +0000'}, body=(0 bytes)
DEBUG: Response: {'status': 405, 'headers': {'date': 'Mon, 13 Apr 2015 03:32:23 GMT', 'accept-ranges': 'bytes', 'content-type': 'application/xml', 'content-length': '82', 'server': 'Apache/2.4.7 (Ubuntu)'}, 'reason': 'Method Not Allowed', 'data': '<?xml version="1.0" encoding="UTF-8"?><Error><Code>MethodNotAllowed</Code></Error>'}
DEBUG: S3Error: 405 (Method Not Allowed)
DEBUG: HttpHeader: date: Mon, 13 Apr 2015 03:32:23 GMT
DEBUG: HttpHeader: accept-ranges: bytes
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: content-length: 82
DEBUG: HttpHeader: server: Apache/2.4.7 (Ubuntu)
DEBUG: ErrorXML: Code: 'MethodNotAllowed'
ERROR: S3 error: 405 (MethodNotAllowed):
--------------------------------------------------

But before the upgrade, the same command worked fine.
I see nothing in the log. Here is my ceph.conf:

--------------------------------------------------
[global]
 auth client required      = cephx
 auth cluster required     = cephx
 auth service required     = cephx
 cluster network           = 10.0.0.0/24
 filestore xattr use omap  = true
 fsid                      = e865b3d0-534a-4f28-9883-2793079d400b
 osd client op priority    = 63
 osd crush chooseleaf type = 1
 osd journal size          = 0
 osd max backfills         = 1
 osd op threads            = 4
 osd pool default min size = 1
 osd pool default pg num   = 64
 osd pool default pgp num  = 64
 osd pool default size     = 2
 osd recovery max active   = 1
 osd recovery op priority  = 1
 public network            = 172.31.0.0/16

[mon.1]
 host     = ceph-node1
 mon addr = 172.31.10.1

[mon.2]
 host     = ceph-node2
 mon addr = 172.31.10.2

[mon.3]
 host     = ceph-node3
 mon addr = 172.31.10.3

[client.radosgw.gw1]
 host            = ceph-radosgw1
 rgw dns name    = ostore
 rgw socket path = /var/run/ceph/ceph.radosgw.gw1.fastcgi.sock
 keyring         = /etc/ceph/ceph.client.radosgw.gw1.keyring
 log file        = /var/log/radosgw/client.radosgw.gw1.log
--------------------------------------------------
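
If it helps, I can also dump the configuration the daemon actually sees via its admin socket (assuming the default socket path for this client name):

   ceph daemon /var/run/ceph/ceph-client.radosgw.gw1.asok config show | grep rgw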

My DNS seems to be correctly configured; 172.31.10.6 is the IP address
of the radosgw (its hostname is ceph-radosgw1):

   ~# dig +short ostore.athome.priv
   172.31.10.6
   ~# dig +short foo.ostore.athome.priv
   172.31.10.6
   ~# dig +short bar.ostore.athome.priv
   172.31.10.6

Did I miss something?

I can provide more logs if necessary. In /var/log/radosgw/client.radosgw.gw1.log,
I have just these two lines during the s3cmd command:

2015-04-13 05:32:23.282011 7f0707f5f700  1 ====== starting new request req=0x7f0778015a10 =====
2015-04-13 05:32:23.282081 7f0707f5f700  1 ====== req done req=0x7f0778015a10 http_status=405 ======

I can provide a log in debug mode, but I don't know which value of N
I should choose in the command below:

   radosgw --cluster=ceph --id radosgw.gw1 --debug_ms N
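
My guess would be N=1 for the messenger, perhaps combined with a high debug_rgw level (a combination I have seen suggested, though I am not sure it is the right one):

   radosgw --cluster=ceph --id radosgw.gw1 --debug_rgw 20 --debug_ms 1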

Thx for your help.

--
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
