On Mon, Jul 25, 2016 at 11:01 AM, kelvin woo <kelwoo@xxxxxxxxx> wrote:
> Hi All,
>
> # 1 #
> I encountered 2 problems. I found that one of the newly created Ceph
> pools is full, and I do not know the reason.
>
> [root@ceph-adm ceph-cluster]# ceph -s
>     cluster 6dfd4779-3c75-49f4-bd47-6f4c31df0cb2
>      health HEALTH_WARN
>             pool 'pool_cephfs' is full
>      monmap e1: 3 mons at
> {ceph-mon1=172.19.7.83:6789/0,ceph-mon2=172.19.7.84:6789/0,ceph-mon3=172.19.7.85:6789/0}
>             election epoch 4, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
>       fsmap e39: 1/1/0 up {0=ceph-adm=up:active}
>      osdmap e126: 6 osds: 6 up, 6 in
>             flags sortbitwise
>       pgmap v508288: 274 pgs, 11 pools, 20172 kB data, 216 objects
>             30997 MB used, 91816 MB / 119 GB avail
>                  274 active+clean
>
> # 2 #
> Thus, I tried to delete the problem pool 'pool_cephfs'. When I tried to
> delete the pool, an error popped up warning me that "Error EBUSY: pool
> 'pool_metadata_cephfs' is in use by CephFS". I searched through the
> mailing list and the internet and tried several solutions, but none of
> them worked for me, including:
>
> 1. ceph mds cluster_down
> 2. systemctl stop ceph-mds@ceph-adm.service
> 3. ceph fs rm cephfs --yes-i-really-mean-it
> 4. ceph osd pool delete pool_metadata_cephfs pool_metadata_cephfs
> --yes-i-really-really-mean-it

Which step failed?

You may need to use "ceph mds fail 0" after doing cluster_down.

John

> Does anyone have an idea about it? Thanks for the advice!
>
> Kelvin

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
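
Two command sketches related to the above; the specifics are assumptions, not confirmed by the thread.

With roughly 90 GB still available across the OSDs, the "pool 'pool_cephfs' is full" warning is most likely a per-pool quota rather than full OSDs (an assumption; the ceph -s output does not show quotas). A sketch for checking and, if appropriate, clearing a quota on the pool:

    ceph df detail                                       # per-pool usage and available space
    ceph osd pool get-quota pool_cephfs                  # shows any max objects / max bytes set on the pool
    ceph osd pool set-quota pool_cephfs max_bytes 0      # a quota of 0 removes the byte limit
    ceph osd pool set-quota pool_cephfs max_objects 0    # likewise for the object limit

For removing the CephFS pools, a possible order based on John's hint, assuming the filesystem is named "cephfs", rank 0 is its only MDS rank (as in the fsmap above), pool_cephfs and pool_metadata_cephfs are its data and metadata pools, and the data in them is disposable:

    ceph mds cluster_down                        # step 1 from the original post
    ceph mds fail 0                              # John's suggestion: mark rank 0 failed so no MDS is active
    ceph fs rm cephfs --yes-i-really-mean-it     # with no active MDS this should now succeed
    ceph osd pool delete pool_cephfs pool_cephfs --yes-i-really-really-mean-it
    ceph osd pool delete pool_metadata_cephfs pool_metadata_cephfs --yes-i-really-really-mean-it

Once the filesystem itself has been removed, the pools are no longer "in use by CephFS" and the EBUSY error should not block the deletions.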