Re: Migrating a cephfs data pool

On Fri, Jun 28, 2019 at 5:41 PM Jorge Garcia <jgarcia@xxxxxxxxxxxx> wrote:
>
> OK, actually the problem was that somebody was writing to the filesystem. So I moved their files and got to 0 objects. But then I tried to remove the original data pool and got an error:
>
>   # ceph fs rm_data_pool cephfs cephfs-data
>   Error EINVAL: cannot remove default data pool
>
> So it seems I will never be able to remove the original data pool. I could leave it there as a ghost pool, which is not optimal, but I guess there's currently no better option.

Yeah; CephFS writes its backtrace pointers (for inode-based lookups)
to the default data pool. Unfortunately we need all of those to live
in one known pool, and CephFS doesn't have a way to migrate them.
-Greg
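
A rough sketch of the workaround implied above: keep the original pool attached as the default (it only needs to hold the backtraces) and point the root directory layout at the new pool so new files land there. The mount point /mnt/cephfs is a placeholder, and judging by the ceph df output further down, the add_data_pool step has already been done in this case. Note that a directory layout only applies to files created after it is set; existing files keep their current layout.

  # attach the replacement pool to the filesystem (once)
  ceph fs add_data_pool cephfs new-ec-pool
  # make new files created under the root go to the replacement pool
  setfattr -n ceph.dir.layout.pool -v new-ec-pool /mnt/cephfs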

>
> On 6/28/19 4:04 PM, Patrick Hein wrote:
>
> AFAIK the MDS doesn't delete the objects immediately but defers them for later. If you check again now, how many objects does it report?
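
If you want to watch those deferred deletions drain, polling the pool row of ceph df is enough; a trivial sketch (the interval is arbitrary):

  watch -n 30 'ceph df | grep -E "NAME|cephfs-data"'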
>
> Jorge Garcia <jgarcia@xxxxxxxxxxxx> wrote on Fri, 28 Jun 2019 at 23:16:
>>
>>
>> On 6/28/19 9:02 AM, Marc Roos wrote:
>> > 3. When everything has been copied and removed, you should end up with
>> > an empty data pool with zero objects.
>>
>> I copied the data to a new directory and then removed the data from the
>> old directory, but ceph df still reports some objects in the old pool (not
>> zero). Is there a way to track down what's still in the old pool, and how
>> to delete it? (A sketch for this follows the output below.)
>>
>> Before delete:
>>
>> # ceph df
>> GLOBAL:
>>     SIZE        AVAIL       RAW USED     %RAW USED
>>     392 TiB     389 TiB     3.3 TiB      0.83
>> POOLS:
>>     NAME            ID     USED         %USED    MAX AVAIL     OBJECTS
>>     cephfs-meta     6      17 MiB       0        123 TiB       27
>>     cephfs-data     7      763 GiB      0.60     123 TiB       195233
>>     new-ec-pool     8      641 GiB      0.25     245 TiB       163991
>>
>> After delete:
>>
>> # ceph df
>> GLOBAL:
>>     SIZE        AVAIL       RAW USED     %RAW USED
>>     392 TiB     391 TiB     1.2 TiB      0.32
>> POOLS:
>>     NAME            ID     USED         %USED    MAX AVAIL     OBJECTS
>>     cephfs-meta     6      26 MiB       0        124 TiB       29
>>     cephfs-data     7      83 GiB       0.07     124 TiB       21175
>>     new-ec-pool     8      641 GiB      0.25     247 TiB       163991
>>
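
To track down what is still in the old pool: objects in a CephFS data pool are named <inode (hex)>.<block index>, so the leftovers can be listed with rados and mapped back to paths on a mounted client. A rough sketch, with cephfs-data as the old pool, an illustrative object name, and /mnt/cephfs as a placeholder mount point:

  # list what is left in the old pool
  rados -p cephfs-data ls | head
  # a zero-byte object is just a backtrace holder for a file whose data now
  # lives in another pool; a non-zero size means real file data remains here
  rados -p cephfs-data stat 10000000001.00000000
  # map the inode portion of the name back to a path (hex to decimal for find)
  find /mnt/cephfs -inum $((16#10000000001))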
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


