On Wed, Aug 7, 2019 at 11:46 AM Valentin Bajrami
<valentin.bajrami@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi everyone,
>
> When I started setting up a test cluster, I created a cephfs_data pool,
> since I thought I was going to use a replicated pool. That turned out not
> to be the most ideal choice in terms of usable disk space, so I created an
> additional erasure-coded pool with k=2, m=2. Please see the attached log
> file. I also attached the output of the pools from the dashboard.
>
> The problem is this: whenever an OSD needs to be brought down, the cluster
> rebalances and enters degraded mode. The rebalance takes a while since all
> the PGs need to be redistributed. I could use something like:
>
> ceph osd set noout && ceph osd set norebalance
>
> However, I want the cluster to recover much faster, but the cephfs_data
> pool, which has a lot of PGs that I cannot reduce (not possible in Mimic),
> keeps the cluster busy. This pool isn't used for storing any data, and I
> am not sure whether I can remove it without affecting the th-ec22 EC pool.
>
> Does anyone have any thoughts on this?

Is the cephfs_data pool associated with the CephFS at all? From the
"ceph df detail" output it looks like it is, which means you unfortunately
can't delete it: the "root" CephFS data pool stores backpointers for all
the files in the system, and those are needed to resolve hard links,
support disaster recovery, and do a few other things.

You can check this by looking at the FSMap in detail and seeing which
pools are which.
-Greg

> In case I provided insufficient or missing information, please let me
> know. I'd gladly share it with you.
>
> --
> Met vriendelijke groeten, Kind regards
>
> Valentin Bajrami
> Target Holding

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
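
For reference, the FSMap check Greg describes can be done from the CLI with
the "ceph fs" commands; the filesystem name "cephfs" below is only an
example and should be replaced with the actual filesystem name:

    # Dump the full FSMap; the data_pools and metadata_pool fields show
    # which pools are attached to each filesystem
    ceph fs dump

    # Or query a single filesystem by name
    ceph fs get cephfs

    # Short one-line-per-filesystem view of metadata and data pools
    ceph fs ls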