Re: [Octopus] Beware the on-disk conversion

Hi,

A simple fsck eats the same amount of memory.

Cluster usage: rbd, with a bit of rgw.

The 'ceph df detail' output is below.
All OSDs are single rusty devices.
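
For reference, this is roughly what I ran against a stopped, already
converted OSD (the OSD id below is just an example):

systemctl stop ceph-osd@8
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-8 --command fsck
systemctl start ceph-osd@8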

On 4/2/20 2:19 PM, Igor Fedotov wrote:
> Hi Jack,
> 
> could you please try the following - stop one of the already converted
> OSDs and do a quick-fix/fsck/repair against it using ceph-bluestore-tool:
> 
> ceph-bluestore-tool --path <path to osd> --command quick-fix|fsck|repair
> 
> Does it cause similar memory usage?
> 
> You can stop experimenting if quick-fix reproduces the issue.
> 
> 
> Also, could you please describe your cluster and its usage a bit: is it
> rgw, rbd, or cephfs? If possible, please share the 'ceph df detail'
> output. Do you have a standalone DB volume on SSD/NVMe?
> 
> Thanks,
> 
> Igor
> 
> 
> On 4/1/2020 6:28 PM, Jack wrote:
>> Hi,
>>
>> As the upgrade documentation says:
>>> Note that the first time each OSD starts, it will do a format
>>> conversion to improve the accounting for “omap” data. This may
>>> take a few minutes to as much as a few hours (for an HDD with lots
>>> of omap data). You can disable this automatic conversion with:
>> What the documentation does not say is that this process takes a lot
>> of memory.
>>
>> I am upgrading a rusty cluster from Nautilus; you can see the RAM
>> consumption in the attachment.
>>
>> First, a 3 TB OSD conversion: it took ~15 min and 19 GB of memory.
>>
>> Then, a larger 6 TB OSD conversion: it took more than 2 hours and
>> 35 GB of memory.
>>
>> Finally, the largest 10 TB OSD: only 1h15, but 52 GB of memory.
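
(For the record, the switch the upgrade notes refer to is, as far as I
can tell:

ceph config set osd bluestore_fsck_quick_fix_on_mount false

which keeps an OSD from doing the automatic omap conversion when it
starts.)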

root@backup1:~# ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    226 TiB  41 TiB  183 TiB   184 TiB      81.74
TOTAL  226 TiB  41 TiB  183 TiB   184 TiB      81.74
 
--- POOLS ---
POOL                        ID  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY    USED COMPR  UNDER COMPR
rbd                          1  245 TiB  245 TiB  9.0 MiB   50.26M  151 TiB  151 TiB  9.0 MiB  90.03     12 TiB  N/A            N/A           50.26M      35 TiB      144 TiB
.rgw.root                   18  1.4 KiB  1.4 KiB      0 B        5  512 KiB  512 KiB      0 B      0     10 TiB  N/A            N/A                5         0 B          0 B
default.rgw.control         19      0 B      0 B      0 B        7      0 B      0 B      0 B      0     12 TiB  N/A            N/A                7         0 B          0 B
default.rgw.meta            20   35 KiB   31 KiB  3.7 KiB      129   13 MiB   13 MiB  3.7 KiB      0     12 TiB  N/A            N/A              129         0 B          0 B
default.rgw.log             21   77 MiB      0 B   77 MiB      230   77 MiB      0 B   77 MiB      0     11 TiB  N/A            N/A              230         0 B          0 B
default.rgw.buckets.index   22   56 MiB      0 B   56 MiB       26   56 MiB      0 B   56 MiB      0     12 TiB  N/A            N/A               26         0 B          0 B
default.rgw.buckets.data    23  205 GiB  205 GiB      0 B  244.25k  241 GiB  241 GiB      0 B   1.39     13 TiB  N/A            N/A          244.25k      50 GiB      101 GiB
default.rgw.buckets.non-ec  24  7.4 KiB      0 B  7.4 KiB       18  7.4 KiB      0 B  7.4 KiB      0     11 TiB  N/A            N/A               18         0 B          0 B
borg                        29   19 TiB   19 TiB    206 B    3.70M   28 TiB   28 TiB    206 B  62.26     12 TiB  N/A            N/A            3.70M     100 GiB      199 GiB
device_health_metrics       32  1.5 MiB      0 B  1.5 MiB       38  1.5 MiB      0 B  1.5 MiB      0    8.4 TiB  N/A            N/A               38         0 B          0 B
archive                     36  3.2 TiB  3.2 TiB    180 B  586.46k  4.5 TiB  4.5 TiB    180 B  21.01     12 TiB  N/A            N/A          586.46k     9.9 GiB       20 GiB

root@backup1:~# ceph osd tree
ID  CLASS  WEIGHT     TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         225.58281  root default                               
-3          63.67560      host backup2                           
 0    hdd    5.45799          osd.0       down   1.00000  1.00000
 1    hdd    5.45799          osd.1       down   1.00000  1.00000
 2    hdd    5.45799          osd.2       down   1.00000  1.00000
 3    hdd    5.45799          osd.3       down   1.00000  1.00000
 4    hdd    5.45799          osd.4       down   1.00000  1.00000
 5    hdd    5.45799          osd.5       down   1.00000  1.00000
 6    hdd    5.45799          osd.6       down   1.00000  1.00000
 7    hdd    5.45799          osd.7       down   1.00000  1.00000
16    hdd    2.72899          osd.16      down   1.00000  1.00000
17    hdd    2.72899          osd.17      down   1.00000  1.00000
20    hdd    9.09569          osd.20      down   1.00000  1.00000
22    hdd    5.45799          osd.22      down   1.00000  1.00000
-5          69.13359      host backup3                           
 8    hdd    5.45799          osd.8         up   1.00000  1.00000
 9    hdd    5.45799          osd.9         up   1.00000  1.00000
10    hdd    5.45799          osd.10        up   1.00000  1.00000
11    hdd    5.45799          osd.11        up   1.00000  1.00000
12    hdd    5.45799          osd.12        up   1.00000  1.00000
13    hdd    5.45799          osd.13        up   1.00000  1.00000
14    hdd    5.45799          osd.14        up   1.00000  1.00000
15    hdd    5.45799          osd.15        up   1.00000  1.00000
18    hdd    5.45798          osd.18        up   1.00000  1.00000
21    hdd    9.09569          osd.21        up   1.00000  1.00000
23    hdd    5.45799          osd.23        up   1.00000  1.00000
37    hdd    5.45799          osd.37        up   1.00000  1.00000
-7          41.84013      host backup4                           
24    hdd    9.09520          osd.24        up   1.00000  1.00000
25    hdd    5.45749          osd.25        up   1.00000  1.00000
26    hdd    5.45749          osd.26        up   1.00000  1.00000
27    hdd    5.45749          osd.27        up   1.00000  1.00000
28    hdd    5.45749          osd.28        up   1.00000  1.00000
29    hdd    5.45749          osd.29        up   1.00000  1.00000
30    hdd    5.45749          osd.30        up   1.00000  1.00000
-9          50.93349      host backup5                           
31    hdd    9.09520          osd.31        up   1.00000  1.00000
32    hdd    9.09520          osd.32        up   1.00000  1.00000
33    hdd    9.09520          osd.33        up   1.00000  1.00000
34    hdd    9.09520          osd.34        up   1.00000  1.00000
35    hdd    9.09520          osd.35        up   1.00000  1.00000
36    hdd    5.45749          osd.36        up   1.00000  1.00000
root@backup2:~# l /var/lib/ceph/osd/ceph-0/
total 32K
drwxrwxrwt  2 ceph ceph 220 Apr  1 05:04 .
drwxr-xr-x 14 ceph ceph 178 Aug 13  2018 ..
lrwxrwxrwx  1 ceph ceph  50 Sep 26  2019 block -> /dev/mapper/K2BrG7-Sw36-K8yw-teK5-DP1D-xsWU-6O7seU
-rw-------  1 ceph ceph  37 Sep 26  2019 ceph_fsid
-rw-------  1 ceph ceph  37 Sep 26  2019 fsid
-rw-------  1 ceph ceph  55 Sep 26  2019 keyring
-rw-------  1 ceph ceph 106 Sep 26  2019 lockbox.keyring
-rw-------  1 ceph ceph   6 Sep 26  2019 ready
-rw-------  1 ceph ceph   3 Apr  1 05:04 require_osd_release
-rw-------  1 ceph ceph  10 Sep 26  2019 type
-rw-------  1 ceph ceph   2 Sep 26  2019 whoami
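
There is no block.db symlink above, i.e. no standalone DB volume on this
OSD. If I am not mistaken, the same can be checked with ceph-volume (a
separate DB device would show up as a [db] entry):

ceph-volume lvm list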
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
