Feedback: default FS pools and newfs behavior

In response to #8010[1], I'm looking at making it possible to
explicitly disable CephFS, so that the (often unused) filesystem pools
don't hang around if they're unwanted.

The administrative behavior would change such that:
 * To enable the filesystem it is necessary to create two pools and
use "ceph newfs <metadata> <data>"
 * There's a new "ceph rmfs" command to disable the filesystem and
allow removing its pools
 * Initially, the filesystem is disabled and the 'data' and 'metadata'
pools are not created by default
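To make the proposed workflow concrete, here is a rough sketch of the CLI sequence an admin would run. The pool names and PG counts are only examples, the exact newfs/rmfs argument forms may differ from the branch, and the pool deletion step uses the existing safety flag:

```shell
# Create the metadata and data pools by hand first
# (names "fs_metadata"/"fs_data" and pg_num 64 are just examples):
ceph osd pool create fs_metadata 64
ceph osd pool create fs_data 64

# Enable the filesystem on those pools:
ceph newfs fs_metadata fs_data

# Later, disable the filesystem so its pools can be removed:
ceph rmfs
ceph osd pool delete fs_data fs_data --yes-i-really-really-mean-it
ceph osd pool delete fs_metadata fs_metadata --yes-i-really-really-mean-it
```

The point of the rmfs step is that pool deletion can then be refused while a filesystem still references the pools, rather than leaving dangling MDS map entries.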

There's an initial cut of this on a branch:
https://github.com/ceph/ceph/commits/wip-nullfs

Questions:
 * Are there strong opinions about whether the CephFS pools should
exist by default?  I think it makes life simpler if they don't,
avoiding "what the heck is the 'data' pool?" type questions from
newcomers.
 * Is it too unfriendly to require users to explicitly create pools
before running newfs, or do we need to auto-create pools when they run
newfs?  Auto-creating some pools from newfs is a bit awkward
internally because it requires modifying both OSD and MDS maps in one
command.

Cheers,
John

1. http://tracker.ceph.com/issues/8010
--



