Is it possible to have a 2nd cephfs_data volume? [OpenStack]

Good morning

Q: Is it possible to have a 2nd cephfs_data volume and expose it to the same OpenStack environment?
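
To make the question concrete, one way I read a "2nd cephfs_data volume" is a second, erasure-coded data pool attached to the existing filesystem. Below is just a sketch of that; the profile, pool and filesystem names (ec-k2-m1, cephfs_data_ec21, cephfs) and the pg_num are made up for illustration:

#!/usr/bin/env python3
"""Sketch: attach a second, erasure-coded data pool to an existing CephFS.

All names here are illustrative assumptions:
  - existing filesystem:  "cephfs"
  - new pool:             "cephfs_data_ec21"
  - new EC profile:       "ec-k2-m1" (k=2, m=1, rack failure domain)
  - pg_num 128 is a placeholder; size it for the actual cluster
"""
import subprocess


def ceph(*args):
    """Run a ceph CLI command and fail loudly if it errors."""
    subprocess.run(["ceph", *args], check=True)


# 1. Define the new erasure-code profile.
ceph("osd", "erasure-code-profile", "set", "ec-k2-m1",
     "k=2", "m=1", "crush-failure-domain=rack")

# 2. Create an erasure-coded pool using that profile.
ceph("osd", "pool", "create", "cephfs_data_ec21", "128", "128",
     "erasure", "ec-k2-m1")

# 3. CephFS requires overwrites to be enabled on EC data pools.
ceph("osd", "pool", "set", "cephfs_data_ec21", "allow_ec_overwrites", "true")

# 4. Attach it as an additional data pool of the existing filesystem.
ceph("fs", "add_data_pool", "cephfs", "cephfs_data_ec21")

# Data is then steered into the new pool per directory via a file layout, e.g.
#   setfattr -n ceph.dir.layout.pool -v cephfs_data_ec21 /mnt/cephfs/newdir

That assumes everything stays within one cluster, though, which is where the reasoning below comes in.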

Reason being:

Our current profile is configured with an erasure-code value of k=3,m=1 (rack-level failure domain), but we are looking to buy roughly another 6 PB of storage with controllers, and were thinking of moving to an erasure profile of k=2,m=1, since we're not so big on data redundancy but more on disk space + performance.
From what I understand you can't change the erasure profile of an existing pool, so we would essentially need to build a new Ceph cluster. We're trying to understand whether we can attach it to the existing OpenStack platform, gradually move all the data over from the old cluster into the new cluster, then destroy the old cluster and integrate its hardware into the new one.
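
As a rough sanity check on the space side, the usable fraction of raw capacity for an EC profile works out to k/(k+m); a quick back-of-the-envelope for the two profiles mentioned above (just the arithmetic, nothing cluster-specific):

# Usable fraction of raw capacity for an erasure-code profile is k / (k + m);
# both profiles below tolerate m = 1 failure in the chosen failure domain.
for k, m in [(3, 1), (2, 1)]:
    print(f"k={k},m={m}: {k / (k + m):.1%} of raw capacity is usable")
# k=3,m=1: 75.0% of raw capacity is usable
# k=2,m=1: 66.7% of raw capacity is usable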

If anyone has recommendations for getting more usable space and better performance at the cost of data redundancy, while still tolerating the loss of at least one rack, please let me know as well.

Regards
--



Jeremi-Ernst Avenant, Mr.
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy, 
University of Cape Town

E-mail (IDIA): jeremi@xxxxxxxxxx
Rondebosch, Cape Town, 7600
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
