Mimic EPERM doing rm pool

List,

I’ve just installed a new Mimic cluster and wonder why I can’t remove an initial test pool like this:

[root@n1 ~]# ceph -s
  cluster:
    id:     2284bf30-a27e-4543-af8f-b2726207762a
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum n1,n2,n3
    mgr: n1.ceph(active), standbys: n2.ceph, n4.ceph
    mds: cfs-1/1/1 up  {0=n4.ceph=up:active}, 1 up:standby
    osd: 24 osds: 24 up, 24 in
 
  data:
    pools:   4 pools, 1544 pgs
    objects: 22 objects, 2.23KiB
    usage:   24.4GiB used, 3.18TiB / 3.20TiB avail
    pgs:     1544 active+clean


[root@n1 ~]# ceph tell mon.\* injectargs --mon-allow-pool-delete=true
mon.n1: injectargs:mon_allow_pool_delete = 'true' 
mon.n2: injectargs:mon_allow_pool_delete = 'true' 
mon.n3: injectargs:mon_allow_pool_delete = 'true' 
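
(If injectargs should ever turn out not to stick, I suppose the setting could also be made persistent through the centralized config store, assuming that is available on this Mimic release, rather than injected at runtime:

[root@n1 ~]# ceph config set mon mon_allow_pool_delete true
)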

[root@n1 ~]# ceph osd pool rm mytestpool mytestpool --yes-i-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool mytestpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
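
What strikes me is that the message asks for the pool name twice followed by --yes-i-really-really-mean-it (two "really"s), while the flag I passed was the shorter --yes-i-really-mean-it, so presumably the invocation it wants is:

[root@n1 ~]# ceph osd pool rm mytestpool mytestpool --yes-i-really-really-mean-it

Is that really all there is to it, or is something else blocking the delete?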


/Steffen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



