Re: radosgw questions

> 1. To build a high performance yet cheap radosgw storage, which pools should
> be placed on ssd and which on hdd backed pools? Upon installation of
> radosgw, it created the following pools: .rgw, .rgw.buckets,
> .rgw.buckets.index, .rgw.control, .rgw.gc, .rgw.root, .usage, .users,
> .users.email.

There is a lot that goes into high performance; a few questions come to mind:

Do you want high-performance reads, writes, or both?
How hot is your data? Could you get better performance by buying more
memory for caching?
What size objects do you expect to handle, and how many per bucket?
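On the pool-placement part of the question: a common approach is to keep
the small, latency-sensitive metadata pools (especially the bucket index)
on SSD and the bulk object data on HDD. A rough sketch of how that can be
wired up with CRUSH rules, assuming a recent (Luminous or later) cluster
whose OSDs already report `ssd` and `hdd` device classes; the pool names
are the defaults you listed:

```shell
# Create CRUSH rules that target each device class
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# Latency-sensitive metadata pools on SSD
ceph osd pool set .rgw.buckets.index crush_rule ssd-rule
ceph osd pool set .rgw.gc crush_rule ssd-rule

# Bulk object data on HDD
ceph osd pool set .rgw.buckets crush_rule hdd-rule
```

Treat this as a starting point, not a recipe; the right split depends on
the workload answers above.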

> 4. Which number of replication would you suggest? In other words, which
> replication is needed to achieve 99.99999% durability, as DreamObjects states?

DreamObjects engineer here; we used Ceph's durability modeling tools:

https://github.com/ceph/ceph-tools

You will need to research your data disks' MTBF numbers and convert
them to FIT (failures in time, i.e. failures per 10^9 device-hours),
measure your OSD backfill MTTR, and factor in your replication count.
DreamObjects uses 3 replicas on enterprise SAS disks. The durability
figures exclude black swan events such as fires and other datacenter or
regional disasters, which is why having a second location is important
for DR.
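To make the MTBF/MTTR/replication relationship concrete, here is a
back-of-envelope sketch in Python. It assumes independent disk failures
with exponential lifetimes, which is a simplification; the ceph-tools
models linked above are more thorough. The example numbers (1.2M-hour
MTBF, 8-hour backfill) are illustrative, not DreamObjects figures:

```python
def mtbf_hours_to_fit(mtbf_hours):
    """Convert MTBF in hours to FIT (failures per 10^9 device-hours)."""
    return 1e9 / mtbf_hours

def annual_loss_probability(mtbf_hours, mttr_hours, replicas):
    """Rough probability of losing all replicas of an object in a year:
    one copy fails, then each remaining replica also fails inside the
    recovery window (MTTR) before backfill completes."""
    hours_per_year = 24 * 365
    p_first = hours_per_year / mtbf_hours        # expected failures per year
    p_during_recovery = mttr_hours / mtbf_hours  # per surviving replica
    return p_first * p_during_recovery ** (replicas - 1)

# Illustrative numbers: 1.2M-hour MTBF disks, 8-hour backfill, 3 replicas
fit = mtbf_hours_to_fit(1.2e6)
p_loss = annual_loss_probability(1.2e6, 8.0, 3)
durability = 1 - p_loss
```

With those inputs the modeled durability comfortably clears seven nines,
which is why backfill speed (MTTR) matters as much as replica count.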

> 5. Is it possible to map fqdn custom domain to buckets, not only subdomains?

You could map a domain's A/AAAA records to an endpoint, but if the
endpoint's address changes you're out of luck; using a CNAME at the
domain root violates the DNS RFCs. Some DNS providers fake an apex CNAME
by doing a recursive lookup in response to an A/AAAA query as a
workaround.
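As a zone-file sketch of the distinction (all names here are
hypothetical placeholders, not real endpoints):

```
; Subdomain: a CNAME to the gateway endpoint is fine
photos.example.com.   IN  CNAME  objects.provider.example.

; Apex: a CNAME here violates RFC 1034 (other records must coexist
; at the zone root), so you are stuck with A/AAAA records or a
; provider-specific ALIAS/ANAME workaround
example.com.          IN  A      198.51.100.10
```

The subdomain form is why bucket-as-subdomain mappings are the common
case for radosgw and S3-style services.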

-- 

Kyle
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



