Re: radosgw-agent, sync "zone_info.us-east": Http error code 500 content

Thanks. I copied region_map, zone_info.us-west, and zone_info.us-east from the pool .us-east.rgw.root
to .us.rgw.root on host ceph0, and did the same on host rceph0.
Then I deleted region_map, zone_info.us-west, and zone_info.us-east from the pools .us-east.rgw.root and .us-west.rgw.root.
I also changed the "rgw zone root pool" parameter to ".us.rgw.root" in ceph.conf on both hosts, ceph0 and rceph0.
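(For reference, a rough sketch of the rados commands this corresponds to, following the cp syntax suggested later in this thread; the exact invocations may have differed, and the same pattern was repeated on the other host against its root pool:)

$ rados -p .us-east.rgw.root --target-pool=.us.rgw.root cp region_map
$ rados -p .us-east.rgw.root --target-pool=.us.rgw.root cp zone_info.us-east
$ rados -p .us-east.rgw.root --target-pool=.us.rgw.root cp zone_info.us-west
$ rados -p .us-east.rgw.root rm region_map
$ rados -p .us-east.rgw.root rm zone_info.us-east
$ rados -p .us-east.rgw.root rm zone_info.us-west
# repeated on rceph0 for .us-west.rgw.root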

root@rceph0:~# radosgw-admin zone get --rgw-zone=us-west --name client.radosgw.us-west-1 
{ "domain_root": ".us-west.rgw.root", 
"control_pool": ".us-west.rgw.control", 
"gc_pool": ".us-west.rgw.gc", 
"log_pool": ".us-west.log", 
"intent_log_pool": ".us-west.intent-log", 
"usage_log_pool": ".us-west.usage", 
"user_keys_pool": ".us-west.users", 
"user_email_pool": ".us-west.users.email", 
"user_swift_pool": ".us-west.users.swift", 
"user_uid_pool": ".us-west.users.uid", 
"system_key": { "access_key": "G5DLUXD2HA07LDT10DRU", 
"secret_key": "IPgisy2fW7WOX1xFqjtdPFR6fXPfupfDHEM4n4+H"}, 
"placement_pools": [ 
{ "key": "default-placement", 
"val": { "index_pool": ".us-west.rgw.buckets.index", 
"data_pool": ".us-west.rgw.buckets"}}]}
root@rceph0:~# cat /etc/ceph/ceph.conf |grep rgw 
rgw region = us 
rgw region root pool = .us.rgw.root 
rgw zone = us-west 
rgw zone root pool = .us.rgw.root 
rgw socket path = /var/run/ceph/client.radosgw.us-west-1.sock
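(To confirm the shared root pool actually holds the metadata objects after the move, something like the following should list them; just a sketch, assuming the pool name configured above:)

$ rados -p .us.rgw.root ls | grep -E 'region_map|zone_info'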

When I run "radosgw-agent -c /etc/ceph/region-data-sync.conf --sync-scope full", it seems to work.
But when I run "radosgw-agent -v -c /etc/ceph/region-data-sync.conf" to see the detailed log, the objects are still not synced. The error I found in the log is below:

Tue, 17 Dec 2013 06:50:57 GMT 
/admin/log 
2013-12-17T14:50:57.880 15868:DEBUG:boto:Signature: 
AWS V3FQ7M8LP260PH6LPD64:2DMFGABT8d/eLU22qX7wBiy0fqY= 
2013-12-17T14:50:57.880 15868:DEBUG:boto:url = 'http://ceph0.gcis3test.com/admin/log'
params={'marker': '', 'type': 'bucket-index', 'bucket-instance': u'test131217:us-east.6106.1', 'max-entries': None} 
headers={'Date': 'Tue, 17 Dec 2013 06:50:57 GMT', 'Content-Length': '0', 'Authorization': 'AWS V3FQ7M8LP260PH6LPD64:2DMFGABT8d/eLU22qX7wBiy0fqY=', 'User-Agent': 'Boto/2.16.0 Python/2.7.3 Linux/3.8.0-29-generic'}
data=''
2013-12-17T14:50:57.883 15868:INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ceph0.gcis3test.com 
2013-12-17T14:50:57.890 15868:DEBUG:requests.packages.urllib3.connectionpool:"GET /admin/log?marker=&type=bucket-index&bucket-instance=test131217%3Aus-east.6106.1 HTTP/1.1" 200 None
2013-12-17T14:50:57.892 15868:INFO:radosgw_agent.worker:bucket instance "test131217:us-east.6106.1" has 4 entries after "" 
2013-12-17T14:50:57.892 15868:INFO:radosgw_agent.worker:syncing bucket "test131217" 
2013-12-17T14:50:57.893 15868:DEBUG:radosgw_agent.worker:syncing object "test131217/00Monitor_Event.jpg" 
2013-12-17T14:50:57.893 15868:DEBUG:radosgw_agent.worker:sync_object test131217/00Monitor_Event.jpg 
2013-12-17T14:50:57.894 15868:DEBUG:boto:StringToSign: 
PUT

application/json; charset=UTF-8 
Tue, 17 Dec 2013 06:50:57 GMT 
x-amz-copy-source:test131217%2F00Monitor_Event.jpg 
/test131217/00Monitor_Event.jpg 
2013-12-17T14:50:57.895 15868:DEBUG:boto:Signature: 
AWS G5DLUXD2HA07LDT10DRU:vph+BQC7y84LpkUP2khBVPvCtm4= 
2013-12-17T14:50:57.895 15868:DEBUG:boto:url = 'http://rceph0.gcis3test.com/test131217/00Monitor_Event.jpg'
params={'rgwx-op-id': 'rceph0:15649:1', 'rgwx-source-zone': u'us-east', 'rgwx-client-id': 'radosgw-agent'} 
headers={'Content-Length': '0', 'User-Agent': 'Boto/2.16.0 Python/2.7.3 Linux/3.8.0-29-generic', 'x-amz-copy-source': 'test131217%2F00Monitor_Event.jpg', 'Date': 'Tue, 17 Dec 2013 06:50:57 GMT', 'Content-Type': 'application/json; charset=UTF-8', 'Authorization': 'AWS G5DLUXD2HA07LDT10DRU:vph+BQC7y84LpkUP2khBVPvCtm4='} 
data=''
2013-12-17T14:50:57.898 15868:INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): rceph0.gcis3test.com 
2013-12-17T14:50:57.918 15868:DEBUG:requests.packages.urllib3.connectionpool:"PUT /test131217/00Monitor_Event.jpg?rgwx-op-id=rceph0%3A15649%3A1&rgwx-source-zone=us-east&rgwx-client-id=radosgw-agent HTTP/1.1" 500 78
2013-12-17T14:50:57.919 15868:DEBUG:radosgw_agent.worker:exception during sync: Http error code 500 content <?xml version="1.0" encoding="UTF-8"?><Error><Code>UnknownError</Code></Error>
Tue, 17 Dec 2013 06:50:57 GMT 
/admin/opstate 
2013-12-17T14:50:57.921 15868:DEBUG:boto:Signature: 
AWS G5DLUXD2HA07LDT10DRU:MDqX5KbjX+2qWNQJFW7KT0wk5wM= 
2013-12-17T14:50:57.921 15868:DEBUG:boto:url = 'http://rceph0.gcis3test.com/admin/opstate'
params={'client-id': 'radosgw-agent', 'object': 'test131217/00Monitor_Event.jpg', 'op-id': 'rceph0:15649:1'} 
headers={'Date': 'Tue, 17 Dec 2013 06:50:57 GMT', 'Content-Length': '0', 'Authorization': 'AWS G5DLUXD2HA07LDT10DRU:MDqX5KbjX+2qWNQJFW7KT0wk5wM=', 'User-Agent': 'Boto/2.16.0 Python/2.7.3 Linux/3.8.0-29-generic'}
data=''
2013-12-17T14:50:57.924 15868:INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): rceph0.gcis3test.com 
2013-12-17T14:50:57.928 15868:DEBUG:requests.packages.urllib3.connectionpool:"GET /admin/opstate?client-id=radosgw-agent&object=test131217%2F00Monitor_Event.jpg&op-id=rceph0%3A15649%3A1 HTTP/1.1" 200 None
2013-12-17T14:50:57.930 15868:DEBUG:radosgw_agent.worker:op state is [{u'timestamp': u'2013-12-17 06:50:57.913110Z', u'op_id': u'rceph0:15649:1', u'object': u'test131217/00Monitor_Event.jpg', u'state': u'error', u'client_id': u'radosgw-agent'}]
2013-12-17T14:50:57.930 15868:ERROR:radosgw_agent.worker:failed to sync object test131217/00Monitor_Event.jpg: state is error 
2013-12-17T14:50:57.931 15868:DEBUG:radosgw_agent.worker:syncing object "test131217/1admin-backupset_create(1).jpg" 
2013-12-17T14:50:57.931 15868:DEBUG:radosgw_agent.worker:sync_object test131217/1admin-backupset_create(1).jpg 
2013-12-17T14:50:57.932 15868:DEBUG:boto:StringToSign:
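(Side note: the agent records each failed copy in the opstate log on the destination, as seen above. A sketch of how those entries could be inspected there, assuming the opstate subcommand is available in this radosgw-admin version:)

$ radosgw-admin opstate list --name client.radosgw.us-west-1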


 
Date: 2013-12-17 12:50
To: lin zhou
Subject: Re: radosgw-agent,sync "zone_info.us-east": Http error code 500 content
On Mon, Dec 16, 2013 at 8:22 PM, lin zhou<hnuzhoulin@xxxxxxxxx> wrote:
> Thanks for your reply.
> root@rceph0:~# radosgw-admin zone get --name client.radosgw.us-west-1
> { "domain_root": ".us-west.rgw.root",
>   "control_pool": ".us-west.rgw.control",
>   "gc_pool": ".us-west.rgw.gc",
>   "log_pool": ".us-west.log",
>   "intent_log_pool": ".us-west.intent-log",
>   "usage_log_pool": ".us-west.usage",
>   "user_keys_pool": ".us-west.users",
>   "user_email_pool": ".us-west.users.email",
>   "user_swift_pool": ".us-west.users.swift",
>   "user_uid_pool": ".us-west.users.uid",
>   "system_key": { "access_key": "G5DLUXD2HA07LDT10DRU",
>       "secret_key": "IPgisy2fW7WOX1xFqjtdPFR6fXPfupfDHEM4n4+H"},
>   "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": ".us-west.rgw.buckets.index",
>               "data_pool": ".us-west.rgw.buckets"}}]}
>
> The root pool settings in ceph.conf are below:
> [client.radosgw.us-west-1]
> rgw region = us
> rgw region root pool = .us.rgw.root
> rgw zone = us-west
> rgw zone root pool = .us-west.rgw.root
>
> Or, can I delete this non-bucket metadata info?
 
If you delete it you'd lose your zone and region configuration. Note
that you can use the region root pool for that purpose. So first copy
the relevant objects, e.g.,:
 
$ rados -p .us-west.rgw.root --target-pool=.us.rgw.root cp zone_info.us-west
 
and then you can remove them. But please make sure everything else
works before you remove them (e.g., you can still access the zone).
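(A sketch of the same pattern for the remaining objects, assuming they live in the same source pool; the rm step only once the copies and the running gateways have been verified:)

$ rados -p .us-west.rgw.root --target-pool=.us.rgw.root cp zone_info.us-east
$ rados -p .us-west.rgw.root --target-pool=.us.rgw.root cp region_map
$ rados -p .us-west.rgw.root rm zone_info.us-west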
 
Yehuda
 
>
> 2013/12/16 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>>
>> For some reason your bucket list seems to be returning some non-bucket
>> metadata info. Sounds like there's a mixup in the pools. What does
>> radosgw-admin zone get (for the us-west zone) return? What's your 'rgw
>> zone root pool' and 'rgw region root pool'?
>>
>> Yehuda
>>
>> On Sun, Dec 15, 2013 at 9:03 PM,  <hnuzhoulin@xxxxxxxxx> wrote:
>> > Hi, guys.
>> >
>> > I am using the geo-replication feature of Ceph.
>> >
>> > I have two Ceph clusters, so my plan is one region containing two zones.
>> >
>> > The Ceph version is 0.72.1
>> > (4d923861868f6a15dcb33fef7f50f674997322de)
>> >
>> >
>> >
>> >
>> >
>> > Now I can sync users and buckets from the master zone to the slave zone.
>> >
>> > But the objects in the bucket cannot be synced. The error about an object is:
>> >
>> > ERROR:radosgw_agent.worker:failed to sync object
>> > gci-replication-copytest1/628.png: state is error
>> >
>> >
>> >
>> > The following is the output when I run “radosgw-agent -c
>> > /etc/ceph/region-data-sync.conf --sync-scope full”:
>> >
>> >
>> >
>> > region map is: {u'us': [u'us-west', u'us-east']}
>> >
>> > INFO:root:syncing all metadata
>> >
>> > INFO:radosgw_agent.sync:Starting sync
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 33
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:1/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 5
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:2/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 6
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:3/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 1
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:4/19 items processed
>> >
>> > WARNING:radosgw_agent.worker:error getting metadata for bucket
>> > "zone_info.us-west": Http error code 500 content {"Code":"UnknownError"}
>> >
>> > Traceback (most recent call last):
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/worker.py", line
>> > 400,
>> > in sync_meta
>> >
>> >     metadata = client.get_metadata(self.src_conn, section, name)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 163,
>> > in get_metadata
>> >
>> >     params=dict(key=name))
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 155,
>> > in request
>> >
>> >     check_result_status(result)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 116,
>> > in check_result_status
>> >
>> >     HttpError)(result.status_code, result.content)
>> >
>> > HttpError: Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.sync:5/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 28
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:6/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 42
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > WARNING:radosgw_agent.worker:error getting metadata for bucket
>> > "zone_info.us-east": Http error code 500 content {"Code":"UnknownError"}
>> >
>> > Traceback (most recent call last):
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/worker.py", line
>> > 400,
>> > in sync_meta
>> >
>> >     metadata = client.get_metadata(self.src_conn, section, name)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 163,
>> > in get_metadata
>> >
>> >     params=dict(key=name))
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 155,
>> > in request
>> >
>> >     check_result_status(result)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 116,
>> > in check_result_status
>> >
>> >     HttpError)(result.status_code, result.content)
>> >
>> > HttpError: Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.sync:7/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 11
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 44
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:8/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 14
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:9/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 48
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:10/19 items processed
>> >
>> > INFO:radosgw_agent.sync:11/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 9
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 22
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:12/19 items processed
>> >
>> > INFO:radosgw_agent.sync:13/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 23
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:14/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 26
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 27
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> For some reason you have some non-bucket entries appear in your bucket
>> list. What does radosgw-admin zone get --rgw-zone=us-west return?
>> >
>> > INFO:radosgw_agent.sync:15/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 60
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:16/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 61
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.sync:17/19 items processed
>> >
>> > WARNING:radosgw_agent.worker:error getting metadata for bucket
>> > "region_map":
>> > Http error code 500 content {"Code":"UnknownError"}
>> >
>> > Traceback (most recent call last):
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/worker.py", line
>> > 400,
>> > in sync_meta
>> >
>> >     metadata = client.get_metadata(self.src_conn, section, name)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 163,
>> > in get_metadata
>> >
>> >     params=dict(key=name))
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 155,
>> > in request
>> >
>> >     check_result_status(result)
>> >
>> >   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line
>> > 116,
>> > in check_result_status
>> >
>> >     HttpError)(result.status_code, result.content)
>> >
>> > HttpError: Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.sync:18/19 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 62
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 16
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry items: []
>> >
>> > INFO:radosgw_agent.worker:No more entries in queue, exiting
>> >
>> > INFO:radosgw_agent.sync:19/19 items processed
>> >
>> > INFO:root:syncing all data
>> >
>> > INFO:radosgw_agent.sync:waiting to make sure bucket log is consistent
>> >
>> > INFO:radosgw_agent.sync:Starting sync
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 34
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > []
>> >
>> > INFO:radosgw_agent.sync:1/7 items processed
>> >
>> > INFO:radosgw_agent.sync:2/7 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 35
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > []
>> >
>> > INFO:radosgw_agent.sync:3/7 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 42
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > []
>> >
>> > ERROR:radosgw_agent.worker:error preparing for full sync of bucket
>> > "zone_info.us-east": Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.sync:4/7 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 98
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > [u'zone_info.us-east']
>> >
>> > INFO:radosgw_agent.sync:5/7 items processed
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 82
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > []
>> >
>> > ERROR:radosgw_agent.worker:error preparing for full sync of bucket
>> > "zone_info.us-west": Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.worker:finished syncing shard 115
>> >
>> > INFO:radosgw_agent.worker:incremental sync will need to retry buckets:
>> > [u'zone_info.us-west']
>> >
>> > INFO:radosgw_agent.sync:6/7 items processed
>> >
>> > ERROR:radosgw_agent.worker:error preparing for full sync of bucket
>> > "region_map": Http error code 500 content {"Code":"UnknownError"}
>> >
>> > INFO:radosgw_agent.sync:7/7 items processed
>> >
>> > ERROR:radosgw_agent.sync:Encountered errors syncing these 1 shards:
>> > [u'region_map']
>> >
>> > INFO:root:Finished full sync. Check logs to see any issues that
>> > incremental
>> > sync will retry.
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
