radosgw-agent sync object:state is error

Hi list,
We deployed a master zone and a slave zone in two clusters to test multi-site backup. radosgw-agent syncs the buckets successfully, and we can see the same bucket info in the slave zone.
But the running radosgw-agent throws errors saying that objects
failed to sync, like this:
ERROR:radosgw_agent.worker:failed to sync object bucket-test4/s3stor.py: state is error
At first I thought it was because I had forgotten to set the "placement_pools" parameter in the zone configuration, but after correcting it the error persists. Zone configuration:
 { "domain_root": ".us-east.rgw.root",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "PSUXAQBOE0N60C0Y3QJ7", "secret_key": "l5peNL/nfTkAjl28uLw/WCKk2LSNa4hdS6VheJ6x"},
"placement_pools": [
      {  "key": "default-placement",
         "val": { "index_pool": ".rgw.buckets.index",
                  "data_pool": ".rgw.buckets"}
      }
    ]
}
 { "domain_root": ".us-west.rgw.root",
  "control_pool": ".us-west.rgw.control",
  "gc_pool": ".us-west.rgw.gc",
  "log_pool": ".us-west.log",
  "intent_log_pool": ".us-west.intent-log",
  "usage_log_pool": ".us-west.usage",
  "user_keys_pool": ".us-west.users",
  "user_email_pool": ".us-west.users.email",
  "user_swift_pool": ".us-west.users.swift",
  "user_uid_pool": ".us-west.users.uid",
  "system_key": { "access_key": "WUHDCDMWBG4GMT9B7QL7", "secret_key": "RSaYh90tNIdaImcn9QoSyK\/EuIrZSeXdOoa6Fw7o"},
"placement_pools": [
      {  "key": "default-placement",
         "val": { "index_pool": ".rgw.buckets.index",
                  "data_pool": ".rgw.buckets"}
      }
    ]
}
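For reference, this is roughly how we applied the corrected zone configurations and restarted the gateways afterwards (a sketch; the JSON file names are ours, and the exact radosgw service name depends on the deployment):

```shell
# Apply the updated zone configuration in each cluster
# (run against the cluster that hosts the respective zone).
radosgw-admin zone set --rgw-zone=us-east < zone-us-east.json
radosgw-admin zone set --rgw-zone=us-west < zone-us-west.json

# Push the updated region map so both zones see the change.
radosgw-admin regionmap update

# Restart the gateways so they pick up the new zone config.
service radosgw restart
```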
 
In the slave zone, the contents of .rgw.buckets look like this:

.dir.us-east.4513.1
.dir.us-east.4513.2
.dir.us-east.4513.3
.dir.us-east.4513.4
None of the objects stored in the master zone appear in the slave zone.
The .rgw.buckets.index pool of the slave zone is empty, while that of the master zone contains entries such as:
 .dir.default.4647.1
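The pool listings above were obtained with the rados CLI, e.g. (pool names as in our zone configuration):

```shell
# List the bucket data pool and the bucket index pool
rados -p .rgw.buckets ls
rados -p .rgw.buckets.index ls
```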
What could the problem be? Any suggestions would be appreciated!

lixuehui
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
