On Wed, Oct 9, 2019 at 10:45 AM Jeremi Avenant <jeremi@xxxxxxxxxx> wrote:
Good morning. Q: Is it possible to have a second cephfs_data volume and expose it to the same OpenStack environment?
Yes, see the documentation for CephFS file layouts: https://docs.ceph.com/docs/master/cephfs/file-layouts/
Reason being: our current profile is configured with an erasure code of k=3, m=1 (rack level), but we're looking to buy another roughly 6 PB of storage with controllers and were thinking of moving to an erasure profile of k=2, m=1, since we're not so big on data redundancy but more on disk space + performance.
Your new configuration uses *more* space than the old one. Also, m=1 is a bad idea.
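Just to put numbers on it: the raw-space overhead of an EC pool is (k+m)/k, so 3+1 writes ~1.33x the data size while 2+1 writes 1.5x. A quick back-of-the-envelope sketch (the profile list is just for comparison):

```python
# Raw-space overhead of an erasure-coded pool is (k + m) / k;
# the usable fraction of raw capacity is k / (k + m).
profiles = {"k=3,m=1": (3, 1), "k=2,m=1": (2, 1),
            "k=4,m=2": (4, 2), "k=8,m=3": (8, 3)}

for name, (k, m) in profiles.items():
    overhead = (k + m) / k   # raw bytes written per usable byte
    usable = k / (k + m)     # share of raw capacity you can actually fill
    print(f"{name}: {overhead:.2f}x overhead, {usable:.0%} of raw capacity usable")
```

That prints 1.33x / 75% for 3+1 versus 1.50x / 67% for 2+1, i.e. the "bigger" profile is actually the more space-efficient one, and 4+2 costs the same as 2+1 while surviving two failures.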
From what I understand you can't change erasure profiles, therefore we would essentially need to build a new Ceph cluster. We're trying to understand if we can attach it to the existing OpenStack platform, gradually move all the data over from the old cluster into the new cluster, then destroy the old cluster and integrate it into the new one.
No, you can just create a new directory in CephFS that is backed by the new pool; see the layout documentation linked above.
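Once the new EC pool has been created and added to the filesystem (ceph fs add_data_pool), pointing a directory at it is just a virtual xattr, per the layout docs. A rough sketch, assuming a CephFS mount at /mnt/cephfs and a pool named cephfs_data_ec (both placeholder names for your environment):

```python
import os

# Placeholder names: adjust the mount point and pool name to your setup.
MOUNT = "/mnt/cephfs"
NEW_POOL = "cephfs_data_ec"  # must already be added via `ceph fs add_data_pool`

# New files created under this directory land in NEW_POOL;
# existing files keep their old layout until they are rewritten/copied.
new_dir = os.path.join(MOUNT, "volumes_ec")
os.makedirs(new_dir, exist_ok=True)
os.setxattr(new_dir, "ceph.dir.layout.pool", NEW_POOL.encode())

# Read the layout back to confirm.
print(os.getxattr(new_dir, "ceph.dir.layout.pool").decode())
```

The same thing can be done from the shell with setfattr -n ceph.dir.layout.pool, as described in the linked documentation.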
If anyone has recommendations for getting more usable space and performance at the cost of data redundancy, while still tolerating the loss of at least one rack, please let me know as well.
Depends on how many racks you have. Common erasure coding setups are 4+2 for low redundancy and something like 8+3 for higher redundancy.
I'd never run a production setup with x+1 (but I guess it does depend on how much you care about availability vs. durability)
Paul
Regards
--
Jeremi-Ernst Avenant, Mr.
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy,
University of Cape Town
Rondebosch, Cape Town, 7600
Tel: 021 959 4137
Web: www.idia.ac.za
E-mail (IDIA): jeremi@xxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx