On Fri, 16 Jan 2015, Wido den Hollander wrote:
> On 01/16/2015 10:50 AM, Sebastien Han wrote:
> > Hum, if I understand correctly you're all more in favour of a conf setting in ceph.conf;
> > The problem for me is that this will apply to all the pools by default and I'll have to inject an arg to change this.
> > Injecting the arg will remove this "lock" and then all of a sudden all the pools become deletable through the lib again (who knows what users can do simultaneously).
> >
> > No, from what I understand it's easier to implement, not the better way. I'd like to do both, actually. :)
> > I'm more in favour of a new flag to set on the pool, something like:
> >
> > ceph osd pool set foo protect true
> > ceph osd pool delete foo foo --yes…
> > ERROR: pool foo is protected against deletion
> >
> > ceph osd pool set foo protect false
> > ceph osd pool delete foo foo --yes…
> > Pool successfully deleted
>
> Something like that per pool seems better to me as well. But I'd then
> opt for a 'feature' which can be set on a pool:
>
> ceph osd pool set foo nodelete
> ceph osd pool set foo nopgchange
> ceph osd pool set foo nosizechange

I like this since it fits into the current flags nicely. The downside is
that we don't grandfather existing pools on upgrade. Not sure if people
think that's a good idea.

> > The good thing with that is that owners of the pool (or the admin) will be able to set this flag or remove it.
> > We stick with the "ceph osd pool delete foo foo --yes…" command as well, so we don't change too many things.
> >
> > Moreover we can also make use of a config option to protect all newly created pools by default:
> >
> > mon protect pool default = true
> >
> > This automatically sets the protected flag on a new pool.
> >
> > What do you think?
>
> Setting a nodelete flag or something like that by default is fine with
> me. Like Sage mentioned earlier, almost nobody will have ephemeral pools
> in their cluster. You don't want to lose data because you accidentally
> removed a pool.

We should mirror this option:

  OPTION(osd_pool_default_flag_hashpspool, OPT_BOOL, true) // use new pg hashing to prevent pool/pg overlap

So:

  osd_pool_default_flag_nodelete = true
  osd_pool_default_flag_nopgchange = true
  osd_pool_default_flag_nosizechange = true

The big question for me is: should we enable these by default in hammer?

sage

>
> Wido
>
> >> On 15 Jan 2015, at 18:24, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> >>
> >> Then secondary question is whether the cluster should implicitly clear the
> >> allow-delete after some time period (maybe 'pending-delete' would make
> >> more sense in that case), or whether we deny IO during that period. Seems
> >> perhaps too complicated.
> >
> > Cheers.
> > ----
> > Sébastien Han
> > Cloud Architect
> >
> > "Always give 100%. Unless you're giving blood."
> >
> > Phone: +33 (0)1 49 70 99 72
> > Mail: sebastien.han@xxxxxxxxxxxx
> > Address: 11 bis, rue Roquépine - 75008 Paris
> > Web: www.enovance.com - Twitter: @enovance
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
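
For illustration, here is a rough sketch of how the per-pool flags discussed above might look in practice. The flag names (nodelete, nopgchange, nosizechange) and the osd_pool_default_flag_* options come from the thread itself, but the exact CLI syntax, the error text, and the --yes-i-really-really-mean-it confirmation flag shown here are assumptions for the sake of the example, not output from an actual implementation:

  # Protect the pool against deletion.
  ceph osd pool set foo nodelete true

  # While the flag is set, deletion is refused (error text is illustrative).
  ceph osd pool delete foo foo --yes-i-really-really-mean-it
  Error EPERM: pool deletion is disabled; you must unset the nodelete flag on the pool first

  # Clear the flag, then delete as usual.
  ceph osd pool set foo nodelete false
  ceph osd pool delete foo foo --yes-i-really-really-mean-it
  pool 'foo' removed

The defaults Sage sketches would then live in ceph.conf, mirroring osd_pool_default_flag_hashpspool, so that every newly created pool starts out protected:

  [global]
  osd pool default flag nodelete = true
  osd pool default flag nopgchange = true
  osd pool default flag nosizechange = true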