Re: radosgw hammer -> jewel upgrade (default zone & region config)

On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx <jon@xxxxxxxxxxxxx> wrote:
> Hi All,
>
> I saw the previous thread on this related to
> http://tracker.ceph.com/issues/15597
>
> and Yehuda's fix script
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> Running this seems to have landed me in a weird state.
>
> I can create and get new buckets and objects but I've "lost" all my
> old buckets.  I'm fairly confident the "lost" data is in the
> .rgw.buckets pool but my current zone is set to use .rgw.buckets_
>
>
>
> root@ceph-mon0:~# radosgw-admin zone get
> {
>     "id": "default",
>     "name": "default",
>     "domain_root": ".rgw_",
>     "control_pool": ".rgw.control_",
>     "gc_pool": ".rgw.gc_",
>     "log_pool": ".log_",
>     "intent_log_pool": ".intent-log_",
>     "usage_log_pool": ".usage_",
>     "user_keys_pool": ".users_",
>     "user_email_pool": ".users.email_",
>     "user_swift_pool": ".users.swift_",
>     "user_uid_pool": ".users.uid_",
>     "system_key": {
>         "access_key": "",
>         "secret_key": ""
>     },
>     "placement_pools": [
>         {
>             "key": "default-placement",
>             "val": {
>                 "index_pool": ".rgw.buckets.index_",
>                 "data_pool": ".rgw.buckets_",
>                 "data_extra_pool": ".rgw.buckets.extra_",
>                 "index_type": 0
>             }
>         }
>     ],
>     "metadata_heap": "default.rgw.meta",
>     "realm_id": "a935d12f-14b7-4bf8-a24f-596d5ddd81be"
> }
>
>
> root@ceph-mon0:~# ceph osd pool ls |grep rgw|sort
> default.rgw.meta
> .rgw
> .rgw_
> .rgw.buckets
> .rgw.buckets_
> .rgw.buckets.index
> .rgw.buckets.index_
> .rgw.control
> .rgw.control_
> .rgw.gc
> .rgw.gc_
> .rgw.root
> .rgw.root.backup
>
> Should I just adjust the zone to use the pools without trailing
> underscores?  I'm a bit lost.  From what I could see, running the

Yes. The trailing underscores were needed when upgrading to 10.2.0:
there was another bug, and I added them to compensate for it. I should
update the script now to reflect that fix. You can just update the
JSON and set the zone appropriately.
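
For example, a minimal sketch of that edit (zone get/set/default are
real radosgw-admin subcommands; the sed substitution assumes, as in
the dump above, that the only values ending in an underscore before a
closing quote are the pool names):

```shell
# Sketch: strip the trailing underscores from the pool names in the
# zone config.  The real flow would be:
#
#   radosgw-admin zone get --rgw-zone=default > zone.json
#   sed -i 's/_"/"/g' zone.json
#   radosgw-admin zone set --rgw-zone=default --infile zone.json
#   radosgw-admin zone default --rgw-zone=default
#
# Demonstrated here on a fragment of the dump above.  The substitution
# is safe only because no key, and no other value, in this JSON ends
# in an underscore:
cat <<'EOF' > zone.json
{
    "domain_root": ".rgw_",
    "data_pool": ".rgw.buckets_",
    "realm_id": "a935d12f-14b7-4bf8-a24f-596d5ddd81be"
}
EOF
sed -i 's/_"/"/g' zone.json
cat zone.json
```

You would likely also need to restart the radosgw daemons afterwards
so they pick up the corrected pool mapping.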

Yehuda

> script didn't seem to indicate any errors (though I lost the top of
> the scroll-back buffer before I noticed the issue).
>
> Tail of output from running script:
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> + radosgw-admin zone set --rgw-zone=default
> zone id default{
>     "id": "default",
>     "name": "default",
>     "domain_root": ".rgw_",
>     "control_pool": ".rgw.control_",
>     "gc_pool": ".rgw.gc_",
>     "log_pool": ".log_",
>     "intent_log_pool": ".intent-log_",
>     "usage_log_pool": ".usage_",
>     "user_keys_pool": ".users_",
>     "user_email_pool": ".users.email_",
>     "user_swift_pool": ".users.swift_",
>     "user_uid_pool": ".users.uid_",
>     "system_key": {
>         "access_key": "",
>         "secret_key": ""
>     },
>     "placement_pools": [
>         {
>             "key": "default-placement",
>             "val": {
>                 "index_pool": ".rgw.buckets.index_",
>                 "data_pool": ".rgw.buckets_",
>                 "data_extra_pool": ".rgw.buckets.extra_",
>                 "index_type": 0
>             }
>         }
>     ],
>     "metadata_heap": "default.rgw.meta",
>     "realm_id": "a935d12f-14b7-4bf8-a24f-596d5ddd81be"
> }
> + radosgw-admin zonegroup default --rgw-zonegroup=default
> + radosgw-admin zone default --rgw-zone=default
> root@ceph-mon0:~# radosgw-admin region get --rgw-zonegroup=default
> {
>     "id": "default",
>     "name": "default",
>     "api_name": "",
>     "is_master": "true",
>     "endpoints": [],
>     "hostnames": [],
>     "hostnames_s3website": [],
>     "master_zone": "default",
>     "zones": [
>         {
>             "id": "default",
>             "name": "default",
>             "endpoints": [],
>             "log_meta": "false",
>             "log_data": "false",
>             "bucket_index_max_shards": 0,
>             "read_only": "false"
>         }
>     ],
>     "placement_targets": [
>         {
>             "name": "default-placement",
>             "tags": []
>         }
>     ],
>     "default_placement": "default-placement",
>     "realm_id": "a935d12f-14b7-4bf8-a24f-596d5ddd81be"}
>
> root@ceph-mon0:~# ceph -v
> ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
>
> Thanks,
> -Jon
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


