Re: data increase after multisite syncing

For now the new cluster is using more storage than the old one, even though
the sync still has about 50% to go. Besides, the number of objects in the
'rgw.buckets.data' pool is larger as well.

I'm not sure whether the new cluster has enough space for the whole data set.
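
As a rough back-of-the-envelope with the numbers from the two 'ceph df'
outputs quoted below, treating the old pool's 14 TiB as the logical data
size and assuming the new data pool is 3x replicated (which the 33 TiB
USED vs 11 TiB STORED ratio suggests):

    # back-of-the-envelope only; the replication factor is an assumption
    # inferred from the USED/STORED ratio of shubei.rgw.buckets.data
    old_pool_logical_tib = 14       # default.rgw.buckets.data on the old cluster
    assumed_replication = 33 / 11   # ~3x, from 33 TiB USED vs 11 TiB STORED
    new_cluster_avail_tib = 98      # AVAIL in the new cluster's RAW STORAGE
    already_used_tib = 33           # raw space the data pool consumes so far

    projected_raw_tib = old_pool_logical_tib * assumed_replication
    still_to_write_tib = projected_raw_tib - already_used_tib
    print(f"projected raw usage when fully synced: ~{projected_raw_tib:.0f} TiB")
    print(f"still to be written: ~{still_to_write_tib:.0f} TiB "
          f"(of {new_cluster_avail_tib} TiB currently available)")

If those assumptions hold, the data pool alone would end up around 42 TiB
raw, which should still fit into the 98 TiB currently available, but please
correct me if the per-pool accounting differs between mimic and nautilus.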

Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote on Tue, May 12, 2020 at 11:40 AM:

> Hi,
>
> I deployed a multisite setup in order to sync data from a mimic cluster
> zone to a nautilus cluster zone. The data is syncing well at present.
> However, when I checked the cluster status I found something strange:
> the data in my new cluster seems larger than that in the old one. The
> data is far from fully synced, yet the space used is nearly the same.
> Is that normal?
>
> 'ceph df' on old cluster:
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     82 TiB     41 TiB     41 TiB       50.37
> POOLS:
>     NAME                            ID     USED        %USED     MAX AVAIL     OBJECTS
>     .rgw.root                        1     6.0 KiB         0        10 TiB          19
>     default.rgw.control              2         0 B         0        10 TiB           8
>     default.rgw.meta                 3     3.5 KiB         0        10 TiB          19
>     default.rgw.log                  4     8.4 KiB         0        10 TiB        1500
>     default.rgw.buckets.index        5         0 B         0        10 TiB         889
>     default.rgw.buckets.non-ec       6         0 B         0        10 TiB         497
>     default.rgw.buckets.data         7      14 TiB     56.96        10 TiB     3968545
>     testpool                         8         0 B         0        10 TiB           0
>
> 'ceph df' on new cluster:
> RAW STORAGE:
>     CLASS     SIZE        AVAIL      USED       RAW USED     %RAW USED
>     hdd       137 TiB     98 TiB     38 TiB     38 TiB       28.02
>     TOTAL     137 TiB     98 TiB     38 TiB     38 TiB       28.02
>
> POOLS:
>     POOL                           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
>     .rgw.root                       1     6.4 KiB          21     3.8 MiB         0        26 TiB
>     shubei.rgw.control             13         0 B           8         0 B         0        26 TiB
>     shubei.rgw.meta                14     4.1 KiB          20     3.2 MiB         0        26 TiB
>     shubei.rgw.log                 15     9.9 MiB       1.64k      47 MiB         0        26 TiB
>     default.rgw.meta               16         0 B           0         0 B         0        26 TiB
>     shubei.rgw.buckets.index       17     2.7 MiB         889     2.7 MiB         0        26 TiB
>     shubei.rgw.buckets.data        18      11 TiB       2.90M      33 TiB     29.37        26 TiB
>
> 'radosgw-admin sync status' on new cluster:
>           realm bde4bb56-fbca-4ef8-a979-935dbf109b78 (new-oriental)
>       zonegroup d25ae683-cdb8-4227-be45-ebaf0aed6050 (beijing)
>            zone 313c8244-fe4d-4d46-bf9b-0e33e46be041 (shubei)
>   metadata sync syncing
>                 full sync: 0/64 shards
>                 incremental sync: 64/64 shards
>                 metadata is caught up with master
>       data sync source: f70a5eb9-d88d-42fd-ab4e-d300e97094de (oldzone)
>                         syncing
>                         full sync: 106/128 shards
>                         full sync: 350 buckets to sync
>                         incremental sync: 22/128 shards
>                         data is behind on 115 shards
>                         behind shards:
> [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,23,24,25,26,27,28,29,30,32,35,37,38,39,40,41,42,43,44,45,46,47,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,96,97,98,99,100,101,102,103,104,105,107,108,109,110,111,112,113,114,116,118,119,120,121,122,123,124,125,126,127]
>                         oldest incremental change not applied: 2020-05-11 10:46:41.0.60179s [80]
>                         5 shards are recovering
>                         recovering shards: [21,31,95,104,106]
>
>



