Having found that our full space handling in CephFS wasn't working right[1], there was some discussion on the CephFS standup about how to improve free space handling in a more general way.

Currently (once #7780 is fixed), we just give the MDS a pass on all the fullness checks, so that it can journal file deletions to free up space. This is a halfway solution, because there are still ways for the MDS to fill up the remaining space with operations other than deletions, especially with the advent of inlining for small files. It's also hacky, because there is code inside the OSD that special-cases writes from the MDS.

Changes discussed
=================

In the CephFS layer:

 * We probably need to do some work to blacklist client requests other than housekeeping and deletions when we are in a low-space situation.

In the RADOS layer:

 * Per-pool full flag: for situations where the metadata and data pools are on separate OSDs, a per-pool full flag (instead of the current global one), so that we can distinguish between situations where we need to be conservative with metadata operations (low space on the metadata pool) and situations where only client data IO is blocked (low space on the data pool). This seems fairly uncontroversial, as the current global full flag doesn't reflect that different pools can be on entirely separate storage.

 * Per-pool full ratio: for situations where the metadata and data pools are on the same OSDs, separate full ratios per pool, so that once the data pool's threshold is reached, the remaining reserved space is given over to the metadata pool (assuming the metadata pool has a higher full ratio, possibly just set to 100%).

Throwing it out to the list for thoughts.

Cheers,
John

1. http://tracker.ceph.com/issues/7780
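
P.S. To make the per-pool idea concrete, here is a rough sketch of how a per-pool fullness check might behave. This is illustrative pseudocode only, not actual Ceph/OSD code; the pool names and ratio values are made-up examples.

    # Hypothetical per-pool fullness check: each pool gets its own full
    # ratio instead of the single cluster-wide one.  Ratios are examples.
    full_ratios = {
        "cephfs_data": 0.85,      # client data writes blocked above this
        "cephfs_metadata": 0.95,  # MDS keeps working until this point
    }

    def pool_is_full(pool, used_bytes, total_bytes):
        """Return True if this pool has crossed its own full threshold."""
        return used_bytes / total_bytes >= full_ratios[pool]

    # With the data pool capped at 85%, the 85-95% band on shared OSDs is
    # effectively reserved for metadata operations (journaling deletions,
    # etc.), so the MDS can still free space after client IO is blocked.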