Re: cephfs AND rbds

I think your best approach would be to create a smaller RBD pool, migrate the 10% of RBDs that will remain RBDs into it, and then use the old pool for just CephFS.
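
Something along these lines should do it (illustrative only — the pool name, PG count, and image name are placeholders, and note that rbd cp flattens clones and does not copy snapshots):

    # create a smaller pool for the images that will stay RBDs
    ceph osd pool create rbd-small 128 128 replicated

    # copy each surviving image into the new pool
    rbd cp rbd/vm-disk-1 rbd-small/vm-disk-1

    # once verified, remove the original copy
    rbd rm rbd/vm-disk-1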

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Turner
Sent: 07 January 2017 23:55
To: nick@xxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
Subject: Re: cephfs AND rbds

 

Yes, the reasoning is the number of PGs.  I currently have all of my data stored in various RBDs in a pool and am planning to move most of it out of the RBDs into CephFS.  The pool would have the exact same use case that it does now, just with 90% of its data in CephFS rather than RBDs.  My OSDs aren't at the point of having too many PGs on them; I just wanted to mitigate the memory needs of the OSD processes.
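
For reference, I've been checking where the cluster stands with something like this (the pool name is just an example):

    # per-OSD summary; the PGS column shows placement groups per OSD
    ceph osd df

    # PG count for a specific pool
    ceph osd pool get rbd pg_num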


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943





From: Nick Fisk [nick@xxxxxxxxxx]
Sent: Saturday, January 07, 2017 3:21 PM
To: David Turner; ceph-users@xxxxxxxxxxxxxx
Subject: RE: cephfs AND rbds

Technically I think there is no reason why you couldn’t do this, but I think it is inadvisable. There was a similar thread a while back where somebody had done this, and it caused problems when he was trying to do maintenance/recovery further down the line.

 

I’m assuming you want to do this because you have already created a pool with the max number of PGs per OSD, and extra pools would take you further over this limit? If that’s the case, I would just bump up the limit; it’s not worth the risk.
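
For what it’s worth, I believe the relevant knob on the Jewel/Kraken releases is mon_pg_warn_max_per_osd; something like this (the value is just an example):

    # in ceph.conf under [global] or [mon]
    mon pg warn max per osd = 400

    # or inject at runtime without restarting the mons
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'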

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Turner
Sent: 07 January 2017 00:54
To: ceph-users@xxxxxxxxxxxxxx
Subject: cephfs AND rbds

 

Can CephFS and RBDs use the same pool to store data?  I know you would need a separate metadata pool for CephFS, but could they share the same data pool?
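
From what I can tell, the commands would just be something like this (pool names assumed for illustration), but I'm not sure whether it's actually a good idea:

    # CephFS needs its own metadata pool either way
    ceph osd pool create cephfs_metadata 64 64

    # point the filesystem's data pool at the existing RBD pool 'rbd'
    ceph fs new cephfs cephfs_metadata rbd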


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943







_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
