Re: After expanding the cluster with new OSDs, the storage pool space becomes smaller

Hi, thank you very much. I will add this information:

The cluster currently runs on 7 nodes:
node1: mon+mgr+mds
node2: mon+mgr+mds
osd1: mon+mgr+mds+osd
osd2: osd
osd3: osd
osd4: osd
osd5: osd

Each OSD node is configured with 12 x 10T HDDs, 1 x 1.5T NVMe SSD, and 1 x 150G SSD.
Operating system: CentOS Linux release 8.2.2004 (Core)
Kernel: 4.18.0-193.el8.x86_64


Each of the 12 HDD OSDs uses its own LV on the shared block.wal device (the SSD shown as sdn below); a rough ceph-volume invocation is sketched after the listing.

NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   9.1T  0 disk
└─ceph--dbb965c5--50ba--4b76--a54f--264539a8e222-osd--block--670f382a--a597--4cd3--81f2--fffe6c692081
                              253:20   0   9.1T  0 lvm
sdb                             8:16   0   9.1T  0 disk
└─ceph--37dd9fd4--a7f9--42d6--8ef3--a664fff60f22-osd--block--437dfb29--bdc1--47ee--b783--3a02033e5c0d
                              253:5    0   9.1T  0 lvm
sdc                             8:32   0   9.1T  0 disk
└─ceph--cdaf99a0--690f--4365--be3e--1a91cede017f-osd--block--ead5b7af--b0ff--403b--9eaf--c5e5d11c345c
                              253:21   0   9.1T  0 lvm
sdd                             8:48   0   9.1T  0 disk
└─ceph--e9f599e9--7de6--4c2b--b56d--88500bbbcfb1-osd--block--562ddbd6--3611--468d--bf3a--70bb9344a132
                              253:19   0   9.1T  0 lvm
sde                             8:64   0   9.1T  0 disk
└─ceph--b7f1ecf3--20cc--435b--8875--aad903388e24-osd--block--2bc3d36e--79f4--46a4--8b8d--ccab4c854c21
                              253:22   0   9.1T  0 lvm
sdf                             8:80   0   9.1T  0 disk
└─ceph--541ca4b8--82d7--4c26--b59f--7f2ed9c2188f-osd--block--ec277364--734e--497a--b401--72e09027e204
                              253:1    0   9.1T  0 lvm
sdg                             8:96   0   9.1T  0 disk
└─ceph--17bb7d8d--3812--4f51--b3c9--267169ea2582-osd--block--187f4c94--4319--4693--b149--987d7217664b
                              253:23   0   9.1T  0 lvm
sdh                             8:112  0   9.1T  0 disk
└─ceph--873ebf5e--631c--47cb--b0fd--76f9a9161626-osd--block--dab9d202--cf9a--444a--862a--a5ea0245b157
                              253:25   0   9.1T  0 lvm
sdi                             8:128  0   9.1T  0 disk
└─ceph--f3db05d8--b70c--4cd8--b554--ce6bd4d09e28-osd--block--438292c8--1e1b--4fe7--a643--dc0cb2920b0b
                              253:17   0   9.1T  0 lvm
sdj                             8:144  0   9.1T  0 disk
└─ceph--650b591f--1ce3--47e4--9828--aee56dfd9330-osd--block--a779aed3--3c0a--4dff--a6f9--bc910007bb3d
                              253:24   0   9.1T  0 lvm
sdk                             8:160  0   9.1T  0 disk
└─ceph--052884b1--2064--476f--956e--dafc4cc01807-osd--block--c67b0c5c--3b69--42a6--a17d--b195cc156075
                              253:18   0   9.1T  0 lvm
sdl                             8:176  0   9.1T  0 disk
└─ceph--760f7b36--39c2--45ed--9a03--0b7a629b984d-osd--block--2328a69c--9bdd--4f44--8fae--7385c1615880
                              253:16   0   9.1T  0 lvm
sdm                             8:192  0   100G  0 disk
├─sdm1                          8:193  0     2G  0 part /boot
└─sdm2                          8:194  0    98G  0 part
  └─cl-root                   253:0    0    98G  0 lvm  /
sdn                             8:208  0 132.4G  0 disk
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--d3ac416d--c207--422b--b8bb--803a1ba1a9e1
                              253:2    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--cd921400--a472--4db2--bbcc--aa62ecc9b2da
                              253:3    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--24dd25a6--3caf--4f63--8738--98c9b12e0e8d
                              253:4    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--7b83d1e4--51c9--41e9--ba00--9fd7be43a058
                              253:6    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--81d2894e--c165--4399--9a03--379a5868dab8
                              253:7    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--22e0d346--e7ba--44cb--b762--fb3a602ef317
                              253:8    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--0a80716a--2b63--47ed--ad3c--e197ba43c7ed
                              253:9    0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--4446ce0b--2cba--44db--87e9--d3b303545b68
                              253:10   0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--449ac9c2--25d0--4d92--88e2--136e9ccbf33a
                              253:11   0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--1d8d5262--92b5--4dc6--b0eb--03f206cdb597
                              253:12   0    11G  0 lvm
├─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--23baf718--fe1a--4888--aadf--b0c694b49dd9
                              253:13   0    11G  0 lvm
└─ceph--f9db1932--d933--4399--b295--d0cbea712884-osd--wal--a531a982--68ee--474d--b4c5--4c01255dc083
                              253:14   0    11G  0 lvm
nvme0n1                       259:0    0   1.5T  0 disk
└─ceph--bcbab2bb--7966--4925--98a2--c1e9b9b5af67-osd--block--91a0cf38--bba8--4edf--90d5--2822cd36e6e2
                              253:15   0   1.5T  0 lvm
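
For reference, this is roughly how an OSD with a separate block.wal LV is created
with ceph-volume; the VG/LV name here is a placeholder, not the actual one on these
nodes:

  # HDD as the data/block device, block.wal pointing at an 11G LV on the shared SSD
  ceph-volume lvm create --bluestore --data /dev/sda --block.wal ceph-wal-vg/wal-sda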

The storage pool fsdata is on the HDD devices and uses an erasure-coded profile with k=2, m=1.
A replicated pool on the SSD devices is used as a cache tier. The fsdata pool settings are:
size: 3
min_size: 2
pg_num: 8192
pgp_num: 8192
crush_rule: ec-hdd
hashpspool: true
allow_ec_overwrites: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
erasure_code_profile: ec-2-1
fast_read: 1
pg_autoscale_mode: off
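
For completeness, these settings and the EC profile can be dumped with the standard
commands (pool, profile and rule names taken from the list above):

  ceph osd pool get fsdata all
  ceph osd erasure-code-profile get ec-2-1
  ceph osd crush rule dump ec-hdd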


Now we are expanding the cluster by adding 2 new nodes.

The OS version is updated to CentOS Linux release 8.4.2105
The kernel is 4.18.0-305.10.2.el8_4.x86_64
The Ceph version remains at 15.2.10.

Each new node is configured with 12 x 12T HDDs and 2 x 1.2T NVMe SSDs.

This time one of the SSDs is used for block.db, with one 30G LV per OSD.

The new OSDs are added with the ceph-volume command.
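
Roughly along these lines (the device name and block.db VG/LV name are placeholders;
only the layout matches the lsblk listing that follows):

  # new 12T HDD as the data device, with a 30G block.db LV on the NVMe SSD
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db-vg/db-sdc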

NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0  10.9T  0 disk
└─ceph--e4169acc--bb65--4070--a015--5354875e81b9-osd--block--b62b82a8--1bef--41dd--a14f--bd4c2adf39de
                              253:3    0  10.9T  0 lvm
sdb                             8:16   0  10.9T  0 disk
└─ceph--08ff2f34--4533--4319--8cbc--02948ce8c784-osd--block--e2a0f40e--5c74--43fc--81a0--ea686086cb33
                              253:6    0  10.9T  0 lvm
sdc                             8:32   0  10.9T  0 disk
sdd                             8:48   0  10.9T  0 disk
sde                             8:64   0  10.9T  0 disk
sdf                             8:80   0  10.9T  0 disk
sdg                             8:96   0  10.9T  0 disk
sdh                             8:112  0  10.9T  0 disk
sdi                             8:128  0  10.9T  0 disk
sdj                             8:144  0  10.9T  0 disk
sdk                             8:160  0  10.9T  0 disk
sdl                             8:176  0  10.9T  0 disk
sdm                             8:192  0 465.3G  0 disk
├─sdm1                          8:193  0     2G  0 part /boot
└─sdm2                          8:194  0 463.3G  0 part
  ├─cl-root                   253:0    0   100G  0 lvm  /
  ├─cl-swap                   253:1    0   128G  0 lvm  [SWAP]
  ├─cl-swap00                 253:2    0   128G  0 lvm  [SWAP]
  └─cl-var                    253:5    0 107.2G  0 lvm  /var
nvme0n1                       259:0    0   1.1T  0 disk
├─ceph--a9a0d99c--9405--484f--8396--44179c8a698e-osd--db--7b7d9347--23ef--41ad--b2a7--37ea38d8e9c9
                              253:4    0    30G  0 lvm
└─ceph--a9a0d99c--9405--484f--8396--44179c8a698e-osd--db--f1e39aa1--80d9--4a74--9877--23bf558f6e2a
                              253:7    0    30G  0 lvm
nvme1n1                       259:1    0   1.1T  0 disk





At present I have marked the newly added OSDs as out, and the cluster capacity is
restored. For example, the storage pool normally shows about 320T, but when the new
OSDs are marked in, the pool size drops to only about 300T.
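
Assuming the sizes above are the MAX AVAIL values reported by ceph df, the
before/after comparison looks like this (osd.65 and osd.66 are the new OSDs shown
in the tree below):

  ceph df detail      # per-pool stats; fsdata shows only ~300T with the new OSDs in
  ceph osd out 65 66  # mark the new OSDs out again
  ceph df detail      # fsdata returns to ~320T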

ID   CLASS  WEIGHT     TYPE NAME               STATUS  REWEIGHT  PRI-AFF

 -1         596.79230  root default

 -9         110.60374      host osd1

  1    hdd    9.09569          osd.1               up   1.00000  1.00000

  7    hdd    9.09569          osd.7               up   1.00000  1.00000

 12    hdd    9.09569          osd.12              up   1.00000  1.00000

 17    hdd    9.09569          osd.17              up   1.00000  1.00000

 22    hdd    9.09569          osd.22              up   1.00000  1.00000

 27    hdd    9.09569          osd.27              up   1.00000  1.00000

 32    hdd    9.09569          osd.32              up   1.00000  1.00000

 37    hdd    9.09569          osd.37              up   1.00000  1.00000

 42    hdd    9.09569          osd.42              up   1.00000  1.00000

 47    hdd    9.09569          osd.47              up   1.00000  1.00000

 52    hdd    9.09569          osd.52              up   1.00000  1.00000

 57    hdd    9.09569          osd.57              up   1.00000  1.00000

 60    ssd    1.45549          osd.60              up   1.00000  1.00000

 -3         110.60374      host osd2

  0    hdd    9.09569          osd.0               up   1.00000  1.00000

  5    hdd    9.09569          osd.5               up   1.00000  1.00000

 10    hdd    9.09569          osd.10              up   1.00000  1.00000

 15    hdd    9.09569          osd.15              up   1.00000  1.00000

 20    hdd    9.09569          osd.20              up   1.00000  1.00000

 25    hdd    9.09569          osd.25              up   1.00000  1.00000

 30    hdd    9.09569          osd.30              up   1.00000  1.00000

 35    hdd    9.09569          osd.35              up   1.00000  1.00000

 40    hdd    9.09569          osd.40              up   1.00000  1.00000

 45    hdd    9.09569          osd.45              up   1.00000  1.00000

 50    hdd    9.09569          osd.50              up   1.00000  1.00000

 55    hdd    9.09569          osd.55              up   1.00000  1.00000

 61    ssd    1.45549          osd.61              up   1.00000  1.00000

 -5         110.60374      host osd3

  2    hdd    9.09569          osd.2               up   1.00000  1.00000

  6    hdd    9.09569          osd.6               up   1.00000  1.00000

 11    hdd    9.09569          osd.11              up   1.00000  1.00000

 16    hdd    9.09569          osd.16              up   1.00000  1.00000

 21    hdd    9.09569          osd.21              up   1.00000  1.00000

 26    hdd    9.09569          osd.26              up   1.00000  1.00000

 31    hdd    9.09569          osd.31              up   1.00000  1.00000

 36    hdd    9.09569          osd.36              up   1.00000  1.00000

 41    hdd    9.09569          osd.41              up   1.00000  1.00000

 46    hdd    9.09569          osd.46              up   1.00000  1.00000

 51    hdd    9.09569          osd.51              up   1.00000  1.00000

 56    hdd    9.09569          osd.56              up   1.00000  1.00000

 62    ssd    1.45549          osd.62              up   1.00000  1.00000

 -7         110.60374      host osd4

  3    hdd    9.09569          osd.3               up   1.00000  1.00000

  8    hdd    9.09569          osd.8               up   1.00000  1.00000

 13    hdd    9.09569          osd.13              up   1.00000  1.00000

 18    hdd    9.09569          osd.18              up   1.00000  1.00000

 23    hdd    9.09569          osd.23              up   1.00000  1.00000

 28    hdd    9.09569          osd.28              up   1.00000  1.00000

 33    hdd    9.09569          osd.33              up   1.00000  1.00000

 38    hdd    9.09569          osd.38              up   1.00000  1.00000

 43    hdd    9.09569          osd.43              up   1.00000  1.00000

 48    hdd    9.09569          osd.48              up   1.00000  1.00000

 53    hdd    9.09569          osd.53              up   1.00000  1.00000

 58    hdd    9.09569          osd.58              up   1.00000  1.00000

 63    ssd    1.45549          osd.63              up   1.00000  1.00000

-11         110.60374      host osd5

  4    hdd    9.09569          osd.4               up   1.00000  1.00000

  9    hdd    9.09569          osd.9               up   1.00000  1.00000

 14    hdd    9.09569          osd.14              up   1.00000  1.00000

 19    hdd    9.09569          osd.19              up   1.00000  1.00000

 24    hdd    9.09569          osd.24              up   1.00000  1.00000

 29    hdd    9.09569          osd.29              up   1.00000  1.00000

 34    hdd    9.09569          osd.34              up   1.00000  1.00000

 39    hdd    9.09569          osd.39              up   1.00000  1.00000

 44    hdd    9.09569          osd.44              up   1.00000  1.00000

 49    hdd    9.09569          osd.49              up   1.00000  1.00000

 54    hdd    9.09569          osd.54              up   1.00000  1.00000

 59    hdd    9.09569          osd.59              up   1.00000  1.00000

 64    ssd    1.45549          osd.64              up   1.00000  1.00000

-19          21.88678      host osd6

 65    hdd   10.94339          osd.65              up         0  1.00000

-22          21.88678      host osd7

 66    hdd   10.94339          osd.66              up         0  1.00000

Eneko Lacunza <elacunza@xxxxxxxxx> wrote on Wed, Aug 11, 2021 at 2:35 PM:

> Hi David,
>
> You need to provide the details for each node; OSDs with their size and
> pool configuration.
>
> El 11/8/21 a las 5:30, David Yang escribió:
> > There is also a set of mon+mgr+mds running on one of the storage nodes.
> > David Yang <gmydw1118@xxxxxxxxx> wrote on Wed, Aug 11, 2021 at 11:24 AM:
> >
> >> hi
> >> I have a cluster of 5 storage nodes + 2 (mon+mds+mgr) nodes used for file
> >> system storage. It has been working very well.
> >>
> >> The cluster is now being expanded by adding storage nodes.
> >>
> >> But while the data was being backfilled, I found that the total space of the
> >> storage pool was decreasing.
> >>
> >> I had to mark the newly added OSDs as out, and the data pool space was
> >> restored to its size before the expansion.
> >>
> >> Can anyone help me? Thanks.
> >>
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



