Re: v0.67 Dumpling released

On Wed, Aug 14, 2013 at 8:46 PM, Jeppesen, Nelson
<Nelson.Jeppesen@xxxxxxxxxx> wrote:
> Sage et al,
>
> This is an exciting release but I must say I'm a bit confused about some of the new rgw details.
>
> Questions:
>
> 1) I'd like to understand how regions work. I assume that's how you get multi-site, multi-datacenter support working, but must they still be part of the same ceph cluster?

No, they don't need to be part of the same ceph cluster. We now define
a 'master' region that controls all metadata info, and 'secondary'
regions that sync that metadata in. To make that happen we created the
new radosgw-agent, which is responsible for syncing that data
(currently only metadata, but we'll be adding data sync capabilities
soon).
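
To give a rough idea of what running the agent looks like, here's a
sketch of a metadata-only sync run (hostnames and keys are
placeholders, and the flag spelling is from memory of the
dumpling-era docs; check 'radosgw-agent --help' on your version):

  # Sync metadata from the master zone's gateway to a secondary one,
  # authenticating with the system users' S3 credentials on each side.
  radosgw-agent \
      --src-access-key {src-system-access-key} \
      --src-secret-key {src-system-secret-key} \
      --dest-access-key {dest-system-access-key} \
      --dest-secret-key {dest-system-secret-key} \
      --source http://rgw-master.example.com:80 \
      --metadata-only \
      http://rgw-secondary.example.com:80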

>
> 2) I have two independent zones (intranet and internet). Should they be put in the same region by setting 'rgw region root pool = blabla'? I wasn't sure how placement_targets work.

Depends on the use case. In general you'd have multiple
radosgw-zones (not the intranet/internet zones you were referring to)
in the same region if you want to have multiple copies of the same
data on different clusters / pools. At the moment there's no sync
agent support for that, so for now you shouldn't have more than one
radosgw-zone per region.
>
> 3) When I upgraded my rgw from .61 to .67 I lost access to my data. I used 'rgw_zone_root_pool' and noticed the zone object changed from zone_info to zone_info.default. I did a 'rados cp zone_info zone_info.default --pool blabla'. That fixed it, but I'm not sure if that's the correct fix.

Right. We might have missed that in the release notes. I think this
would only affect setups with a non-default zone configuration. The
zone is named 'default', so the new object name is 'zone_info.default'.
Copying it was the right thing to do. You should remove the old
object.
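
In shell terms, assuming the non-default zone root pool is named
'.rgw.custom' (a placeholder), the fix plus cleanup would look
something like:

  # Copy the old zone object to its new name, then drop the old one.
  rados -p .rgw.custom cp zone_info zone_info.default
  rados -p .rgw.custom rm zone_info
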
>
> 4) In the zone_info.default I see the following at the end:
>
> ..."system_key": { "access_key": "",
>       "secret_key": ""},
>   "placement_pools": []}
>
> What are these for exactly and should they be set? Or just a placeholder for E release?


These are two different configurables. The system key will be used by
the gateway to access other gateways: basically the S3 credentials of
a radosgw user that has the 'system' flag set.
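
As a sketch, such a system user could be created with radosgw-admin
along these lines (uid, display name and keys are placeholders):

  # Create a user with the 'system' flag set; its S3 keys are what
  # goes into the zone's system_key for inter-gateway access.
  radosgw-admin user create --uid=sync-agent \
      --display-name="Sync Agent" \
      --access-key={system-access-key} \
      --secret={system-secret-key} \
      --system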

placement_pools: it is now possible to configure different data
placement pools for different buckets. For this you define a
'placement target' in the master region, specifying the name of the
target and the user tags for the users that can use that placement
target:

...
  "placement_targets": [
        { "name": "slow",
          "tags": []},
        { "name": "fast",
          "tags": ["quick"]}],
  "default_placement": "slow"}

If tags are set, it means that only users that have that specific tag
(in the user's placement_tags field in the user info) can create
buckets that use that placement target. It is possible to set a global
default placement (like in this example), and a per-user default
placement (set in the user info).
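
For the per-user side, the user info can be edited through the
metadata commands; a sketch, with 'johndoe' as a hypothetical uid:

  # Fetch the user's info, add "quick" to its placement_tags (and
  # optionally set default_placement), then write it back.
  radosgw-admin metadata get user:johndoe > user.json
  # ... edit user.json ...
  radosgw-admin metadata put user:johndoe < user.json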

Once this is defined in the region, there's a per-zone configuration,
where we define the destination index and data pool for each placement
target. For example:

"placement_pools": [
        { "key": "fast",
          "val": { "index_pool": ".fast.pool.index",
              "data_pool": ".fast.pool.data"}},
        { "key": "slow",
          "val": { "index_pool": ".slow.pool.index",
              "data_pool": ".slow.pool.data"}}]

So in this example we have 4 different pools, 2 of which have their
data sent to slow storage, and 2 of which have their data sent to
fast storage. Also note that for each bucket we can now specify an
index pool (where the bucket index will be written) and a data pool
(where the bucket data will be written). This way we can put the
metadata on low-latency, expensive storage, while keeping the object
data on cheaper storage.

Once everything is configured, a user can select which placement
target to use when creating a bucket (through the S3 API currently;
it can't be done through Swift yet). The user would set the
LocationConstraint param like this: [<region
name>][:<placement_target>].
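
For example, with s3cmd (bucket name hypothetical, and assuming it's
configured to talk to the gateway), the placement target rides along
in --bucket-location, which maps to LocationConstraint:

  # Create a bucket on the 'fast' placement target in region 'us'
  # (region and target names are from the example above).
  s3cmd mb s3://my-fast-bucket --bucket-location=us:fast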

Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



