Radosgw agent only syncing metadata

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Hi,

I am following http://docs.ceph.com/docs/master/radosgw/federated-config/ with giant (0.88-340-g5bb65b3). I figured I'd do the simple case first:

- 1 region
- 2 zones (us-east, us-west) master us-east
- 2 radosgw instances (client.radosgw.us-east-1, wclient.radosgw.us-west-1)
- 1 ceph cluster, (1 mon 2 osd)
- 2 hosts (Ubuntu 14.04 VMs) ceph1, ceph2
- sync data us-east -> us-west

To check the sync data I created a non system user (markir) and had it create a bucket (bucketbig) with some contents (big.dat) in us-east zone.

Starting radosgw-agent to sync us-east -> us-west I see in the log:


2014-11-21T13:49:05.897 24280:INFO:radosgw_agent.worker:bucket instance "bucketbig:us-east.4497.1" has 9 entries after "" 2014-11-21T13:49:05.898 24280:INFO:radosgw_agent.worker:syncing bucket "bucketbig"

which leads me to think it is working. However there is nothing in the .us-west.rgw.buckets pool.

Looking a bit closer I see that .us-west.rgw.buckets.index has 1 entry (as does us-east.rgw.buckets.index), so looks like metadata is being sync'd (in fact checking users etc metadata clearly is being sync'd).

Here's my radosgw-agent config file:

$ cat region-data-sync.conf
src_zone: us-east
source: http://ceph2:80
src_access_key: us-east key
src_secret_key: the secret
dest_zone: us-west
destination: http://ceph1:80
dest_access_key: us-west key
dest_secret_key: the secret
log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

As far as I am aware I've checked the list of gotchas discussed previously https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg14422.html and seems all ok.

Specifically here's my region and zone json:

$ cat us,json
{ "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": [
        "http:\/\/ceph2:80\/", "http:\/\/ceph1:80\/" ],
  "master_zone": "us-east",
  "zones": [
        { "name": "us-east",
          "endpoints": [
                "http:\/\/ceph2:80\/"],
          "log_meta": "true",
          "log_data": "true"},
        { "name": "us-west",
          "endpoints": [
                "http:\/\/ceph1:80\/"],
          "log_meta": "true",
          "log_data": "true"}],
  "placement_targets": [
   {
     "name": "default-placement",
     "tags": []
   }
  ],
  "default_placement": "default-placement"}

$ cat us-east.json
{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "us-east key", "secret_key": "the secret"},
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".us-east.rgw.buckets.index",
               "data_pool": ".us-east.rgw.buckets"}
    }
  ]
}

$ cat us-west.json
{ "domain_root": ".us-west.domain.rgw",
  "control_pool": ".us-west.rgw.control",
  "gc_pool": ".us-west.rgw.gc",
  "log_pool": ".us-west.log",
  "intent_log_pool": ".us-west.intent-log",
  "usage_log_pool": ".us-west.usage",
  "user_keys_pool": ".us-west.users",
  "user_email_pool": ".us-west.users.email",
  "user_swift_pool": ".us-west.users.swift",
  "user_uid_pool": ".us-west.users.uid",
  "system_key": { "access_key": "us-west key", "secret_key": "the secret"},
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".us-west.rgw.buckets.index",
               "data_pool": ".us-west.rgw.buckets"}
    }
  ]
}


The only things that look odd in the destination zone logs are 383 requests getting 404 rather than 200:

$ grep "http_status=404" ceph-client.radosgw.us-west-1.log
...
2014-11-21 13:48:58.435201 7ffc4bf7f700 1 ====== req done req=0x7ffca002df00 http_status=404 ====== 2014-11-21 13:49:05.891680 7ffc35752700 1 ====== req done req=0x7ffca00301e0 http_status=404 ======
...

at around the same time as the bucket was being allegedly sync'd.

Any thoughts appreciated.

regards

Mark

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[Index of Archives]     [Information on CEPH]     [Linux Filesystem Development]     [Ceph Development]     [Ceph Large]     [Ceph Dev]     [Linux USB Development]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]     [xfs]


  Powered by Linux