Reply: Re: about rgw region sync

Hi everyone,
  I am trying to set up replication between two clusters right now.
  Please go through my previous steps below.
  On the slave zone (cluster), when I run the following command, I see some errors:
 
radosgw-admin user stats --uid="us-test-east" --sync-stats  --name client.radosgw.us-west-1
2015-05-14 10:10:22.889101 7f4772f228a0 20 get_obj_state: rctx=0xdedfb0 obj=.us.rgw.root:region_info.us state=0xde89d8 s->prefetch_data=0
2015-05-14 10:10:22.889126 7f4772f228a0 10 cache get: name=.us.rgw.root+region_info.us : miss
2015-05-14 10:10:22.889682 7f475ebfd700  2 RGWDataChangesLog::ChangesRenewThread: start
2015-05-14 10:10:22.891327 7f4772f228a0 10 cache put: name=.us.rgw.root+region_info.us
2015-05-14 10:10:22.891347 7f4772f228a0 10 adding .us.rgw.root+region_info.us to cache LRU end
2015-05-14 10:10:22.891356 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.891369 7f4772f228a0 10 cache get: name=.us.rgw.root+region_info.us : type miss (requested=1, cached=6)
2015-05-14 10:10:22.891380 7f4772f228a0 20 get_obj_state: rctx=0xdedfb0 obj=.us.rgw.root:region_info.us state=0xde89d8 s->prefetch_data=0
2015-05-14 10:10:22.891387 7f4772f228a0 10 cache get: name=.us.rgw.root+region_info.us : hit
2015-05-14 10:10:22.891392 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.891409 7f4772f228a0 20 get_obj_state: rctx=0xdedfb0 obj=.us.rgw.root:region_info.us state=0xde89d8 s->prefetch_data=0
2015-05-14 10:10:22.891413 7f4772f228a0 20 state for obj=.us.rgw.root:region_info.us is not atomic, not appending atomic test
2015-05-14 10:10:22.891418 7f4772f228a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.892274 7f4772f228a0 20 rados->read r=0 bl.length=383
2015-05-14 10:10:22.892299 7f4772f228a0 10 cache put: name=.us.rgw.root+region_info.us
2015-05-14 10:10:22.892302 7f4772f228a0 10 moving .us.rgw.root+region_info.us to cache LRU end
2015-05-14 10:10:22.892346 7f4772f228a0 20 get_obj_state: rctx=0xde8d50 obj=.us-west.rgw.root:zone_info.us-west state=0xdf1178 s->prefetch_data=0
2015-05-14 10:10:22.892367 7f4772f228a0 10 cache get: name=.us-west.rgw.root+zone_info.us-west : miss
2015-05-14 10:10:22.893191 7f4772f228a0 10 cache put: name=.us-west.rgw.root+zone_info.us-west
2015-05-14 10:10:22.893198 7f4772f228a0 10 adding .us-west.rgw.root+zone_info.us-west to cache LRU end
2015-05-14 10:10:22.893204 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.893212 7f4772f228a0 10 cache get: name=.us-west.rgw.root+zone_info.us-west : type miss (requested=1, cached=6)
2015-05-14 10:10:22.893221 7f4772f228a0 20 get_obj_state: rctx=0xdf11c0 obj=.us-west.rgw.root:zone_info.us-west state=0xdf3168 s->prefetch_data=0
2015-05-14 10:10:22.893227 7f4772f228a0 10 cache get: name=.us-west.rgw.root+zone_info.us-west : hit
2015-05-14 10:10:22.893232 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.893240 7f4772f228a0 20 get_obj_state: rctx=0xdf11c0 obj=.us-west.rgw.root:zone_info.us-west state=0xdf3168 s->prefetch_data=0
2015-05-14 10:10:22.893243 7f4772f228a0 20 state for obj=.us-west.rgw.root:zone_info.us-west is not atomic, not appending atomic test
2015-05-14 10:10:22.893246 7f4772f228a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.897009 7f4772f228a0 20 rados->read r=0 bl.length=997
2015-05-14 10:10:22.897034 7f4772f228a0 10 cache put: name=.us-west.rgw.root+zone_info.us-west
2015-05-14 10:10:22.897038 7f4772f228a0 10 moving .us-west.rgw.root+zone_info.us-west to cache LRU end
2015-05-14 10:10:22.897064 7f4772f228a0  2 zone us-west is NOT master
2015-05-14 10:10:22.897085 7f4772f228a0 20 get_obj_state: rctx=0xdf3e20 obj=.us-west.rgw.root:region_map state=0xdf4458 s->prefetch_data=0
2015-05-14 10:10:22.897094 7f4772f228a0 10 cache get: name=.us-west.rgw.root+region_map : miss
2015-05-14 10:10:22.899577 7f4772f228a0 10 cache put: name=.us-west.rgw.root+region_map
2015-05-14 10:10:22.899587 7f4772f228a0 10 adding .us-west.rgw.root+region_map to cache LRU end
2015-05-14 10:10:22.899594 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.899602 7f4772f228a0 10 cache get: name=.us-west.rgw.root+region_map : type miss (requested=1, cached=6)
2015-05-14 10:10:22.899611 7f4772f228a0 20 get_obj_state: rctx=0xdf3e20 obj=.us-west.rgw.root:region_map state=0xdf4458 s->prefetch_data=0
2015-05-14 10:10:22.899617 7f4772f228a0 10 cache get: name=.us-west.rgw.root+region_map : hit
2015-05-14 10:10:22.899622 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.899630 7f4772f228a0 20 get_obj_state: rctx=0xdf3e20 obj=.us-west.rgw.root:region_map state=0xdf4458 s->prefetch_data=0
2015-05-14 10:10:22.899634 7f4772f228a0 20 state for obj=.us-west.rgw.root:region_map is not atomic, not appending atomic test
2015-05-14 10:10:22.899637 7f4772f228a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.900572 7f4772f228a0 20 rados->read r=0 bl.length=451
2015-05-14 10:10:22.900593 7f4772f228a0 10 cache put: name=.us-west.rgw.root+region_map
2015-05-14 10:10:22.900596 7f4772f228a0 10 moving .us-west.rgw.root+region_map to cache LRU end
2015-05-14 10:10:22.978346 7f4772f228a0 20 generating connection object for zone us-east
2015-05-14 10:10:22.978462 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe003e8 s->prefetch_data=0
2015-05-14 10:10:22.978478 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : miss
2015-05-14 10:10:22.983234 7f4772f228a0 10 cache put: name=.us-west.users.uid+us-test-east
2015-05-14 10:10:22.983254 7f4772f228a0 10 adding .us-west.users.uid+us-test-east to cache LRU end
2015-05-14 10:10:22.983262 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.983271 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : type miss (requested=17, cached=22)
2015-05-14 10:10:22.983315 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe02408 s->prefetch_data=0
2015-05-14 10:10:22.983322 7f4772f228a0 10 cache get: name=.us-west.users.uid+us-test-east : hit
2015-05-14 10:10:22.983328 7f4772f228a0 20 get_obj_state: s->obj_tag was set empty
2015-05-14 10:10:22.983338 7f4772f228a0 20 get_obj_state: rctx=0xe00310 obj=.us-west.users.uid:us-test-east state=0xe02408 s->prefetch_data=0
2015-05-14 10:10:22.983341 7f4772f228a0 20 state for obj=.us-west.users.uid:us-test-east is not atomic, not appending atomic test
2015-05-14 10:10:22.983377 7f4772f228a0 20 rados->read obj-ofs=0 read_ofs=0 read_len=524288
2015-05-14 10:10:22.984442 7f4772f228a0 20 rados->read r=0 bl.length=522
2015-05-14 10:10:22.984469 7f4772f228a0 10 cache put: name=.us-west.users.uid+us-test-east
2015-05-14 10:10:22.984472 7f4772f228a0 10 moving .us-west.users.uid+us-test-east to cache LRU end
ERROR: failed to sync user stats: (2) No such file or directory
2015-05-14 10:10:22.986952 7f4772f228a0 20 cls_bucket_header() returned -2
2015-05-14 10:10:22.986965 7f4772f228a0  0 ERROR: could not sync bucket stats: ret=-2
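
(A side note for anyone reading along: ret=-2 is ENOENT, "No such file or directory" at the RADOS level, so my first instinct would be to confirm that every pool the slave zone's config references actually exists on that cluster. Something like the following should do it, if I have the flags right:

sudo radosgw-admin zone get --rgw-zone=us-west --name client.radosgw.us-west-1
sudo rados lspools | grep '^\.us'

The pools printed by the second command should cover everything named by the first.)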
 
 


------------------ Original Message ------------------
From: "316828252" <316828252@xxxxxx>
Sent: Wednesday, May 13, 2015, 6:31 AM
To: "clewis" <clewis@xxxxxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Subject: Reply: Re: about rgw region sync

Please give me some advice. Thanks.

On May 13, 2015, at 12:29 AM, 刘俊 <316828252@xxxxxx> wrote:

No, I set up replication between two clusters; each cluster has one zone, and both clusters are in the same region. But I got some errors.

On May 13, 2015, at 12:02 AM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:

Are you trying to set up replication on one cluster right now?

Generally, replication is set up between two different clusters, each having one zone. Both clusters are in the same region.

I can't think of a reason why two zones in one cluster wouldn't work. It's more complicated to set up, though. Anything outside of a test setup would need a lot of planning to make sure the two zones are as fault-isolated as possible. I'm pretty sure you need separate RadosGW nodes for each zone. It could be possible to share, but it will be easier if you don't.
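
For reference, a rough sketch of what the single-cluster, two-zone variant might look like in ceph.conf: one [client.radosgw.*] section per zone, each pinned to its own gateway host (the hostnames here are made up):

[client.radosgw.us-east-1]
    host = gateway-east
    rgw region = us
    rgw zone = us-east
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock

[client.radosgw.us-west-1]
    host = gateway-west
    rgw region = us
    rgw zone = us-west
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/client.radosgw.us-west-1.sock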


I still haven't gone through your previous logs carefully.

On Tue, May 12, 2015 at 6:46 AM, TERRY <316828252@xxxxxx> wrote:
Could I build one region using two clusters, each cluster having one zone, so that I can sync metadata and data from one cluster to the other?
I built two Ceph clusters.
For the first cluster, I did the following steps:
1.create pools
sudo ceph osd pool create .us-east.rgw.root 64  64
sudo ceph osd pool create .us-east.rgw.control 64 64
sudo ceph osd pool create .us-east.rgw.gc 64 64
sudo ceph osd pool create .us-east.rgw.buckets 64 64
sudo ceph osd pool create .us-east.rgw.buckets.index 64 64
sudo ceph osd pool create .us-east.rgw.buckets.extra 64 64
sudo ceph osd pool create .us-east.log 64 64
sudo ceph osd pool create .us-east.intent-log 64 64
sudo ceph osd pool create .us-east.usage 64 64
sudo ceph osd pool create .us-east.users 64 64
sudo ceph osd pool create .us-east.users.email 64 64
sudo ceph osd pool create .us-east.users.swift 64 64
sudo ceph osd pool create .us-east.users.uid 64 64
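(The same pool creation as a loop, to save retyping; this just expands to the thirteen commands above:
for pool in rgw.root rgw.control rgw.gc rgw.buckets rgw.buckets.index rgw.buckets.extra log intent-log usage users users.email users.swift users.uid; do
    sudo ceph osd pool create .us-east.$pool 64 64
done
)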
 
2.create a keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-east-1 --gen-key
sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
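(To confirm the key and caps landed, this should print the generated key plus the osd/mon caps:
sudo ceph auth get client.radosgw.us-east-1
)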

3.create a region
sudo radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
   the content of us.json:
cat us.json
{ "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": [
        "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/", "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"],
  "master_zone": "us-east",
  "zones": [
        { "name": "us-east",
          "endpoints": [
                "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/"],
          "log_meta": "true",
          "log_data": "true"},
        { "name": "us-west",
          "endpoints": [
                "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"],
          "log_meta": "true",
          "log_data": "true"}],
  "placement_targets": [
   {
     "name": "default-placement",
     "tags": []
   }
  ],
  "default_placement": "default-placement"}
4.create zones
sudo radosgw-admin zone set --rgw-zone=us-east --infile us-east-secert.json --name client.radosgw.us-east-1
sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
cat us-east-secert.json
{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "XNK0ST8WXTMWZGN29NF9", "secret_key": "7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"},
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".us-east.rgw.buckets.index",
               "data_pool": ".us-east.rgw.buckets"}
    }
  ]
}
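(Likewise, this should print the zone config back out, assuming I have the flags right:
sudo radosgw-admin zone get --rgw-zone=us-east --name client.radosgw.us-east-1
)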

5.create zone system users
sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-east-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
6.create a zone user (not a system user)
sudo radosgw-admin user create --uid="us-test-east" --display-name="Region-US Zone-East-test" --name client.radosgw.us-east-1 --access_key="DDK0ST8WXTMWZGN29NF9" --secret="DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
7.create a subuser
sudo radosgw-admin subuser create --uid="us-test-east"  --subuser="us-test-east:swift" --access=full --name client.radosgw.us-east-1 --key-type swift --secret="ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
sudo /etc/init.d/ceph -a restart
sudo /etc/init.d/httpd restart
sudo /etc/init.d/ceph-radosgw restart
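(A way to test the swift subuser without the swift client, roughly; this is just the raw v1 auth handshake, so it should come back 204 with X-Storage-Url and X-Auth-Token headers if the gateway accepts the credentials:
curl -i http://WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM/auth/1.0 -H "X-Auth-User: us-test-east:swift" -H "X-Auth-Key: ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
)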
 
For the second cluster, I did the following steps:
1.create pools
sudo ceph osd pool create .us-west.rgw.root 64  64
sudo ceph osd pool create .us-west.rgw.control 64 64
sudo ceph osd pool create .us-west.rgw.gc 64 64
sudo ceph osd pool create .us-west.rgw.buckets 64 64
sudo ceph osd pool create .us-west.rgw.buckets.index 64 64
sudo ceph osd pool create .us-west.rgw.buckets.extra 64 64
sudo ceph osd pool create .us-west.log 64 64
sudo ceph osd pool create .us-west.intent-log 64 64
sudo ceph osd pool create .us-west.usage 64 64
sudo ceph osd pool create .us-west.users 64 64
sudo ceph osd pool create .us-west.users.email 64 64
sudo ceph osd pool create .us-west.users.swift 64 64
sudo ceph osd pool create .us-west.users.uid 64 64
2.create a keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.us-west-1 --gen-key
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth del client.radosgw.us-west-1
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring

3.create a region
sudo radosgw-admin region set --infile us.json --name client.radosgw.us-west-1
sudo radosgw-admin region default --rgw-region=us --name client.radosgw.us-west-1
sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
the content of us.json is:
cat us.json
{ "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": [
        "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/", "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"],
  "master_zone": "us-east",
  "zones": [
        { "name": "us-east",
          "endpoints": [
                "http:\/\/WH-CEPH-TEST01.MATRIX.CTRIPCORP.COM:80\/"],
          "log_meta": "true",
          "log_data": "true"},
        { "name": "us-west",
          "endpoints": [
                "http:\/\/WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM:80\/"],
          "log_meta": "true",
          "log_data": "true"}],
  "placement_targets": [
   {
     "name": "default-placement",
     "tags": []
   }
  ],
  "default_placement": "default-placement"}

4.create zones
sudo radosgw-admin zone set --rgw-zone=us-west --infile us-west-secert.json --name client.radosgw.us-west-1
sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
the content of us-west-secert.json is:
cat us-west-secert.json
{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "XNK0ST8WXTMWZGN29NF9", "secret_key": "7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"},
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".us-east.rgw.buckets.index",
               "data_pool": ".us-east.rgw.buckets"}
    }
  ]
}
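(Worth cross-checking here: step 1 on this cluster created .us-west.* pools, while this file references .us-east.* pools. A quick way to see which pools really exist on the west cluster:
sudo rados lspools | grep '^\.us'
)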
5.create zone system users
sudo radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-west-1 --access_key="XNK0ST8WXTMWZGN29NF9" --secret="7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
sudo radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --access_key="AAK0ST8WXTMWZGN29NF9" --secret="AAJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5" --system
6.restart services
sudo /etc/init.d/ceph -a restart
sudo /etc/init.d/httpd restart
sudo /etc/init.d/ceph-radosgw restart
After all of the above, I did the following steps on the first cluster:
1.source self.env
the content of self.env is:
cat self.env
export ST_AUTH="http://10.18.5.49/auth/1.0"
export ST_KEY=ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
export ST_USER=us-test-east:swift
2.swift list
3.swift upload test self.env
4.swift list test
self.env
5.sudo radosgw-agent -c ./ceph-data-sync.conf
the content of ceph-data-sync.conf is:
cat ceph-data-sync.conf
src_access_key: XNK0ST8WXTMWZGN29NF9
src_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
destination: http://WH-CEPH-TEST02.MATRIX.CTRIPCORP.COM
dest_access_key: XNK0ST8WXTMWZGN29NF9
dest_secret_key: 7VJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
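
(To isolate metadata from data sync, I believe the agent accepts a --metadata-only flag, i.e.:
sudo radosgw-agent --metadata-only -c ./ceph-data-sync.conf
though I am going from memory on that option.)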
There are some errors, as below:
sudo radosgw-agent -c ./ceph-data-sync.conf
region map is: {u'us': [u'us-west', u'us-east']}
INFO:radosgw_agent.sync:Starting sync
INFO:radosgw_agent.worker:24062 is processing shard number 0
INFO:radosgw_agent.worker:finished processing shard 0
INFO:radosgw_agent.worker:24062 is processing shard number 1
INFO:radosgw_agent.sync:1/64 items processed
INFO:radosgw_agent.worker:finished processing shard 1
INFO:radosgw_agent.sync:2/64 items processed
INFO:radosgw_agent.worker:24062 is processing shard number 2
INFO:radosgw_agent.worker:finished processing shard 2
INFO:radosgw_agent.sync:3/64 items processed
INFO:radosgw_agent.worker:24062 is processing shard number 3
INFO:radosgw_agent.worker:finished processing shard 3
INFO:radosgw_agent.sync:4/64 items processed
INFO:radosgw_agent.worker:24062 is processing shard number 4
...
...
...
INFO:radosgw_agent.worker:syncing bucket "test"
ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error
INFO:radosgw_agent.worker:syncing bucket "test"
ERROR:radosgw_agent.worker:failed to sync object test/self.env: state is error

INFO:radosgw_agent.worker:finished processing shard 69

On the second cluster, I did the following steps:
1.source self.env
the content of self.env is:
cat self.env
export ST_AUTH="http://10.18.5.51/auth/1.0"
export ST_KEY=ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5
export ST_USER=us-test-east:swift
2.swift list
Auth GET failed: http://10.18.5.51/auth/1.0 403 Forbidden 
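(The same auth call made by hand, mirroring what the swift client sends for v1 auth; if this also returns 403, the gateway itself is rejecting the credentials:
curl -i http://10.18.5.51/auth/1.0 -H "X-Auth-User: us-test-east:swift" -H "X-Auth-Key: ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"
)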
3.radosgw-admin --name client.radosgw.us-west-1 user info --uid="us-test-east"
{ "user_id": "us-test-east",
  "display_name": "Region-US Zone-East-test",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [
        { "id": "us-test-east:swift",
          "permissions": "full-control"}],
  "keys": [
        { "user": "us-test-east",
          "access_key": "DDK0ST8WXTMWZGN29NF9",
          "secret_key": "DDJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
  "swift_keys": [
        { "user": "us-test-east:swift",
          "secret_key": "ffJm8uAp71xKQZkjoPZmHu4sACA1SY8jTjay9dP5"}],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "user_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "temp_url_keys": []}
4.radosgw-admin --name client.radosgw.us-west-1 bucket list
[
    "test"]
5.radosgw-admin --name client.radosgw.us-west-1 --bucket=test  bucket list
[]

It seems that metadata is replicated from the first cluster, but data is not.
I don't know why.
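(One more check that might separate the two cases: the bucket metadata clearly synced, so
sudo radosgw-admin --name client.radosgw.us-west-1 bucket stats --bucket=test
should show the bucket with zero objects and zero usage if only the data sync is failing.)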

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
