Re: Feedback: default FS pools and newfs behavior

Hi John,

Thanks for your work!
–––– 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han@xxxxxxxxxxxx 
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance 

On 21 May 2014, at 17:32, John Spray <john.spray@xxxxxxxxxxx> wrote:

> In response to #8010[1], I'm looking at making it possible to
> explicitly disable CephFS, so that the (often unused) filesystem pools
> don't hang around if they're unwanted.
> 
> The administrative behavior would change such that:
> * To enable the filesystem it is necessary to create two pools and
> use "ceph newfs <metadata> <data>"
> * There's a new "ceph rmfs" command to disable the filesystem and
> allow removing its pools
> * Initially, the filesystem is disabled and the 'data' and 'metadata'
> pools are not created by default
> 
> There's an initial cut of this on a branch:
> https://github.com/ceph/ceph/commits/wip-nullfs
> 
> Questions:
> * Are there strong opinions about whether the CephFS pools should
> exist by default?  I think it makes life simpler if they don't,
> avoiding "what the heck is the 'data' pool?" type questions from
> newcomers.

+1

> * Is it too unfriendly to require users to explicitly create pools
> before running newfs, or do we need to auto-create pools when they run
> newfs?  Auto-creating some pools from newfs is a bit awkward
> internally because it requires modifying both OSD and MDS maps in one
> command.

I believe it is legitimate to require users to create these two pools explicitly. Having this as part of the process looks good to me.
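
For concreteness, the explicit workflow could look something like this. This is only a sketch based on John's description: the pool names, PG counts, and the exact arguments to rmfs are assumptions, not taken from the branch.

```shell
# Explicitly create the metadata and data pools, since they would no
# longer exist by default (pool names and PG counts are illustrative):
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64

# Enable the filesystem on those pools:
ceph newfs cephfs_metadata cephfs_data

# Later, disable the filesystem so its pools can be removed:
ceph rmfs
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
```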

The only question I have is: why don't we have a similar behaviour for the RBD pool?
Usually, even if you need RBD, you will create a separate pool that matches your use case.

Cheers.

> 
> Cheers,
> John
> 
> 1. http://tracker.ceph.com/issues/8010
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


