Yes, the reasoning is the number of PGs. I currently have all of my data stored in various RBDs in a pool and am planning to move most of it out of the RBDs into CephFS. The pool would have the exact same use case it does now, just with 90% of its data in CephFS rather than RBDs. My OSDs aren't at the point of having too many PGs on them; I just wanted to mitigate the memory needs of the OSD processes.
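For reference, this is roughly how I've been checking the per-OSD PG load (just a sketch; "rbd" here is a stand-in for my actual pool name):

    # Per-OSD summary; the PGS column shows how many PGs each OSD holds
    ceph osd df

    # pg_num for the pool in question
    ceph osd pool get rbd pg_num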
From: Nick Fisk [nick@xxxxxxxxxx]
Sent: Saturday, January 07, 2017 3:21 PM
To: David Turner; ceph-users@xxxxxxxxxxxxxx
Subject: RE: cephfs AND rbds

Technically I think there is no reason why you couldn’t do this, but I think it is inadvisable. There was a similar thread a while back where somebody had done this, and it caused problems when he was trying to do maintenance/recovery further down the line.
I’m assuming you want to do this because you have already created a pool with the maximum number of PGs per OSD, and extra pools would take you further over this limit? If that’s the case, I would just bump up the limit; it’s not worth the risk.
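If it’s the per-OSD PG warning you’re bumping up against, something along these lines should do it (an untested sketch; on pre-Luminous releases the knob is mon_pg_warn_max_per_osd, and 400 is only an example value):

    # Persistent: set in ceph.conf under [global] or [mon]
    # mon_pg_warn_max_per_osd = 400

    # Or inject at runtime (does not survive a monitor restart):
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'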
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Turner
Can cephfs and rbds use the same pool to store data? I know you would need a separate metadata pool for cephfs, but could they share the same data pool?
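To make that concrete, I mean something along these lines (just a sketch; the pool names and pg_num values are made up):

    # One shared data pool plus a dedicated CephFS metadata pool
    ceph osd pool create shareddata 512
    ceph osd pool create cephfs_metadata 128

    # Point CephFS at the shared pool for its data
    ceph fs new cephfs cephfs_metadata shareddata

    # ...while RBD images live in the same data pool
    rbd create --pool shareddata --size 102400 myimage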
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com