Re: radosgw questions

For #2, I just wrote a document on setting up a federated
architecture. You can view it here:
http://ceph.com/docs/master/radosgw/federated-config/ This
functionality will be available in the Emperor release.

The use case I described involves two zones in a master region talking
to the same underlying Ceph Storage Cluster, but with a different set
of pools for each zone. You can also set up the pools for each zone on
completely different Ceph Storage Clusters; I assumed that was
overkill for the document, but it certainly works. See
http://ceph.com/docs/master/radosgw/federated-config/#configure-a-master-region
for configuring a master region.
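
For reference, the master-region step boils down to a handful of
radosgw-admin commands along these lines (a rough sketch; the region
name "us", the infile us.json, and the instance name
client.radosgw.us-east-1 are just the example names from the document,
so substitute your own):

    # Load the region definition, make it the default region, and
    # update the region map on the cluster that hosts the master zone.
    radosgw-admin region set --infile us.json --name client.radosgw.us-east-1
    radosgw-admin region default --rgw-region=us --name client.radosgw.us-east-1
    radosgw-admin regionmap update --name client.radosgw.us-east-1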

If you want to use a separate storage cluster for each zone, you need to:

1. Set up the set of pools for each zone in the respective Ceph Storage
Cluster for your data center.

2. When creating a keyring
(http://ceph.com/docs/master/radosgw/federated-config/#create-a-keyring),
use the appropriate cluster name for each cluster so that the keyring
gets populated in both Ceph Storage Clusters. The document assumes the
default -c /etc/ceph/ceph.conf for simplicity.

3. When adding the instances to the Ceph configuration file
(http://ceph.com/docs/master/radosgw/federated-config/#add-instances-to-ceph-config-file),
note that each storage cluster might be named. For example, instead of
ceph.conf, it might be us-west.conf and us-east.conf for the
respective zones, assuming you are setting up Ceph clusters
specifically to run the gateways--or whatever naming convention you
already use. The first sketch after this list illustrates steps 2 and 3.

4. Most of the usage examples omit the Ceph configuration file (-c
path/to/file.conf) and the admin keyring (-k path/to/admin.keyring). You
may need to specify them explicitly when calling radosgw-admin so that
you are issuing commands to the right Ceph Storage Cluster. The second
sketch after this list shows this.
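
To make steps 2 and 3 concrete, here is a rough sketch. It assumes the
clusters are named us-east and us-west (so their configuration files
are /etc/ceph/us-east.conf and /etc/ceph/us-west.conf) and the gateway
instances are client.radosgw.us-east-1 and client.radosgw.us-west-1;
all of these names are illustrative, so adapt them to your setup.

    # On the us-east cluster (repeat on us-west with its own names):
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
    sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
    sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring \
        -n client.radosgw.us-east-1 --gen-key
    sudo ceph-authtool -n client.radosgw.us-east-1 \
        --cap osd 'allow rwx' --cap mon 'allow rwx' \
        /etc/ceph/ceph.client.radosgw.keyring

    # Note the -c pointing at the named cluster's configuration file:
    sudo ceph -c /etc/ceph/us-east.conf auth add client.radosgw.us-east-1 \
        -i /etc/ceph/ceph.client.radosgw.keyring

    # And in /etc/ceph/us-east.conf (rather than ceph.conf), a gateway
    # instance section along the lines the document describes:
    [client.radosgw.us-east-1]
    host = {gateway-host}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw region = us
    rgw zone = us-east
    rgw region root pool = .us.rgw.root
    rgw zone root pool = .us-east.rgw.root
    rgw dns name = {hostname}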
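
For step 4, directing radosgw-admin at a specific cluster looks
something like this (again with illustrative names and paths):

    radosgw-admin -c /etc/ceph/us-east.conf \
        -k /etc/ceph/us-east.client.admin.keyring \
        -n client.radosgw.us-east-1 \
        zone set --rgw-zone=us-east --infile us-east.json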

I'd love to get your feedback on the document!

For #3: yes. In fact, if you just set up a master region with one
master zone, that works fine. You don't have to "respect" pool naming:
whatever you create in the storage cluster and map to a zone pool will
work. However, I would suggest following the conventions laid out in
the document. You can create a garbage collection pool called
"lemonade", but you will probably confuse the community when looking
for help, as they will expect .{region-name}-{zone-name}.rgw.gc. If you
just use .{region-name}-{zone-name}.{pool-name-default}, like
.us-west.rgw.root, most people in the community will understand your
questions and can more readily help you.
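
For example, creating the us-west zone's pools with the conventional
names would look something like this (the PG counts are placeholders;
pick values appropriate for your cluster):

    ceph osd pool create .us-west.rgw.root 64 64
    ceph osd pool create .us-west.rgw.control 64 64
    ceph osd pool create .us-west.rgw.gc 64 64
    ceph osd pool create .us-west.rgw.buckets 64 64
    ceph osd pool create .us-west.rgw.buckets.index 64 64

The names themselves are conventions rather than hard requirements;
whatever pools you create get mapped into the zone's configuration.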




On Wed, Nov 6, 2013 at 3:17 AM, Alessandro Brega
<alessandro.brega1@xxxxxxxxx> wrote:
> Good day ceph users,
>
> I'm new to ceph but installation went well so far. Now I have a lot of
> questions regarding radosgw. Hope you don't mind...
>
> 1. To build high-performance yet cheap radosgw storage, which pools should
> be placed on SSD-backed storage and which on HDD-backed storage? Upon
> installation of radosgw, it created the following pools: .rgw, .rgw.buckets,
> .rgw.buckets.index, .rgw.control, .rgw.gc, .rgw.root, .usage, .users,
> .users.email.
>
> 2. In order to have very high availability I'd like to set up two different
> ceph clusters, each in its own datacenter. How do I configure radosgw to make
> use of this layout? Can I have a multi-master setup with a load
> balancer (or geo-DNS) which distributes the load to radosgw instances
> in both datacenters?
>
> 3. Is it possible to start with a simple setup now (only one ceph cluster)
> and later add the multi-datacenter redundancy described above without
> downtime? Do I have to respect any special pool-naming requirements?
>
> 4. What replication level would you suggest? In other words, what level is
> needed to achieve 99.99999% durability like dreamobjects states?
>
> 5. Is it possible to map a custom FQDN to buckets, not only subdomains?
>
> 6. The command "radosgw-admin pool list" returns "could not list placement
> set: (2) No such file or directory". But radosgw seems to work as expected
> anyway?
>
> Looking forward to your suggestions.
>
> Alessandro Brega
>
>



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



