Re: planning a new cluster

Depending on your security requirements, you may not have a choice. If your OpenStack deployment shouldn't be able to access the Kubernetes RBDs (or vice versa), then you need to keep them in separate pools and maintain different keyrings for the two services.

If you go that route, I would recommend starting with a relatively low number of PGs in both pools, watching how the data distributes between them as the cluster fills, and increasing the PG counts accordingly by the time you're 40-50% full.

If you can put them into the same pool, I don't see a reason why you shouldn't, unless you foresee a time when you want to move one of them, but not the other, to a new cluster or to faster storage. Keeping them separate would let you assign one pool a different crush rule to place it on different storage within the same cluster; a rados-level copy tool would be needed to move a pool to a new cluster (less likely than changing the crush rule for a different storage type).
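The keyring separation is just cephx capabilities scoped per pool. A minimal sketch, assuming Luminous-style rbd cap profiles and made-up pool names ("openstack-rbd", "k8s-rbd"):

  ceph auth get-or-create client.openstack \
      mon 'profile rbd' osd 'profile rbd pool=openstack-rbd'
  ceph auth get-or-create client.k8s \
      mon 'profile rbd' osd 'profile rbd pool=k8s-rbd'

With caps like these, neither client can map the other's images.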

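For the PG growth and crush rule changes, something along these lines (pool names, rule name, and PG counts are illustrative, not recommendations):

  # create both pools with a conservative initial pg_num/pgp_num
  ceph osd pool create openstack-rbd 128 128 replicated
  ceph osd pool create k8s-rbd 64 64 replicated

  # later, once the real data split is visible, bump the counts
  ceph osd pool set openstack-rbd pg_num 256
  ceph osd pool set openstack-rbd pgp_num 256

  # move one pool to different storage in the same cluster,
  # e.g. a replicated rule restricted to SSDs
  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd pool set k8s-rbd crush_rule ssd-rule
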
On Mon, Feb 26, 2018 at 2:57 PM Frank Ritchie <frankaritchie@xxxxxxxxx> wrote:
Hi all,

I am planning a new Ceph cluster that will provide RBD storage for OpenStack and Kubernetes. Additionally, there may be a need for a small amount of RGW storage.

Which option would be better:

1. Define separate pools for OpenStack images/ephemeral VMs/volumes/backups (as shown at https://ceph.com/pgcalc/), along with pools for Kubernetes and RGW.

2. Define a single block storage pool (to be used by both OpenStack and Kubernetes) and an object pool (for RGW).

I am not sure how much space each component will require at this time.

thx
Frank 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
