Re: Can I remove cephfs_data pool when using cephFS mode (mimic 13.2)

Hi Gregory,

Thanks for the quick reply. I'll have to wait until Nautilus is
released so I can reduce the PG count on the cephfs_data pool. I
thought there was another way to make recovery / PG rebalancing on the
cluster go much faster. I was planning to perform the following actions
to change the structure of my cluster:

1) Move the data from the th-ec22 pool to local storage and remove the
pool, mainly because I want to move from crush-failure-domain=host to
crush-failure-domain=rack.
2) Remove the cephfs_data pool and re-create it with fewer PGs.
3) Re-create the th-ec22 pool with crush-failure-domain=rack so the
cluster can survive losing one of the datacenters where my OSDs are
hosted.
4) Move the data from local storage back to the th-ec22 pool.

Is this the proper course of action, or are there easier or better ways
to achieve this?
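
For steps 1 and 3, this is roughly what I have in mind (an untested
sketch; the th-ec22-rack profile name, the PG count of 256 and
<fs_name> are placeholders for my actual values):

# check which pools the filesystem is using (the FSMap you mentioned)
ceph fs ls
ceph fs dump

# step 1: detach and remove the old host-based EC pool once it is empty
ceph fs rm_data_pool <fs_name> th-ec22
ceph osd pool delete th-ec22 th-ec22 --yes-i-really-really-mean-it

# step 3: new EC profile with a rack failure domain, then re-create the pool
ceph osd erasure-code-profile set th-ec22-rack k=2 m=2 crush-failure-domain=rack
ceph osd pool create th-ec22 256 256 erasure th-ec22-rack
ceph osd pool set th-ec22 allow_ec_overwrites true
ceph fs add_data_pool <fs_name> th-ec22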

Thanks again!


On 8/7/19 9:43 PM, Gregory Farnum wrote:
> On Wed, Aug 7, 2019 at 11:46 AM Valentin Bajrami
> <valentin.bajrami@xxxxxxxxxxxxxxxxx> wrote:
>> Hi everyone,
>>
>> When I started setting up a test cluster, I created a cephfs_data pool since I thought I was going to use the replicated type for the pool. This turned out not to be the most ideal choice when it comes to usable disk space, so I decided to create another erasure-coded pool with k=2, m=2, which I did. Please see the attached log file. I also attached the output of the pools from the dashboard.
>>
>> The problem is this: whenever an OSD needs to be brought down, the cluster rebalances and enters degraded mode. The rebalance takes a while since all the PGs need to be redistributed. Now I could use something like:
>>
>> ceph osd set noout && ceph osd set norebalance
>>
>> However, I want the cluster to recover much faster, but the cephfs_data pool, which has a lot of PGs that I cannot reduce (not possible in Mimic), is keeping the cluster busy. This pool isn't used for storing any data, and I am not sure whether I can remove it without affecting the th-ec22 EC pool.
>>
>> Does anyone have any thoughts on this?
> Is the cephfs_data pool associated with the CephFS at all? From the
> "ceph df detail" it looks like it is, which means you unfortunately
> can't delete it — the "root" CephFS data pool stores backpointers for
> all the files in the system and those are needed to resolve hard
> links, support disaster recovery, and do a few other things.
> You can check this by looking at the FSMap in detail and seeing which
> pools are which.
> -Greg
>
>> In case I provided insufficient or missing information, please let me know and I'll gladly share it with you.
>>
>> --
>> Met vriendelijke groeten, Kind regards
>>
>> Valentin Bajrami
>> Target Holding
>>
>> _______________________________________________
>> Dev mailing list -- dev@xxxxxxx
>> To unsubscribe send an email to dev-leave@xxxxxxx

-- 
Met vriendelijke groeten, Kind regards

Valentin Bajrami
Target Holding 

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



