Re: Is it possible to have a 2nd cephfs_data volume? [Openstack]

On Wed, Oct 9, 2019 at 10:45 AM Jeremi Avenant <jeremi@xxxxxxxxxx> wrote:
Good morning

Q: Is it possible to have a 2nd cephfs_data volume and expose it to the same OpenStack environment?

Yes, see the documentation for CephFS file layouts: https://docs.ceph.com/docs/master/cephfs/file-layouts/
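
For reference, a minimal sketch of what that looks like in practice. The pool name cephfs_data2, filesystem name cephfs, and mount point /mnt/cephfs are placeholders, and pg counts need to be sized for your hardware:

    $ # create a second data pool and register it with the filesystem
    $ ceph osd pool create cephfs_data2 1024 1024
    $ ceph fs add_data_pool cephfs cephfs_data2

    $ # point a directory's layout at the new pool; files created under
    $ # it from now on are written to cephfs_data2
    $ mkdir /mnt/cephfs/newdir
    $ setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/newdir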
 

Reason being:

Our current profile is configured with an erasure-code value of k=3,m=1 (rack-level failure domain), but we're looking to buy another ±6 PB of storage with controllers, and we were thinking of moving to an erasure profile of k=2,m=1, since we care less about data redundancy than about disk space and performance.

Your new configuration uses *more* raw space than the old one (see the arithmetic below). Also, m=1 is a bad idea: after a single OSD or rack failure you have no remaining redundancy.
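
To spell the arithmetic out, the usable fraction of raw space is k/(k+m):

    3+1: 3/4 = 75% usable  (1.33x raw per byte stored)
    2+1: 2/3 ≈ 67% usable  (1.50x raw per byte stored)
    4+2: 4/6 ≈ 67% usable  (same overhead as 2+1, but survives two failures)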
 
From what I understand you can't change erasure profiles, therefore we'd essentially need to build a new Ceph cluster. We're trying to understand if we can attach it to the existing OpenStack platform, gradually move all the data over from the old cluster into the new one, then destroy the old cluster and integrate its hardware into the new one.

No, you can just create the new pool in the existing cluster and a new CephFS directory that uses it; see the layout documentation linked above.
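
One caveat worth noting: a directory layout only applies to files created after it is set, so existing data has to be rewritten to land in the new pool. A hedged sketch, reusing the placeholder paths and pool name from above:

    $ # files already in olddir stay in the old pool; copying them
    $ # rewrites them under the new directory's layout
    $ rsync -a /mnt/cephfs/olddir/ /mnt/cephfs/newdir/
    $ # once verified, the old directory (and eventually the old pool) can go
    $ rm -rf /mnt/cephfs/olddir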
 

If anyone has recommendations for getting more usable space and performance at the cost of data redundancy, while still surviving the loss of at least one rack, please let me know as well.

Depends on how many racks you have. Common erasure coding setups are 4+2 for low redundancy and something like 8+3 for higher redundancy.
I'd never run a production setup with x+1 (but I guess it does depend on how much you care about availability vs. durability).
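
For example, a 4+2 profile with a rack failure domain might look like this (profile and pool names are placeholders; note this needs at least six racks, and allow_ec_overwrites requires BlueStore OSDs):

    $ ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=rack
    $ ceph osd pool create cephfs_data_ec 1024 1024 erasure ec-4-2
    $ # CephFS data on an erasure-coded pool needs overwrites enabled
    $ ceph osd pool set cephfs_data_ec allow_ec_overwrites true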


Paul
 

Regards
--



Jeremi-Ernst Avenant, Mr.
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy, 
University of Cape Town

E-mail (IDIA): jeremi@xxxxxxxxxx
Rondebosch, Cape Town, 7600
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
