Re: The cluster expands the osd, but the storage pool space becomes smaller

Hey David,

In case this wasn't answered off list already:

It looks like you have only added a single OSD to each new host?
You specified 12*10T on osd{1..5}, and 12*12T on osd{6,7}.

Just as a word of caution: the added 24T is more or less going to be wasted on osd{6,7}, assuming your CRUSH ruleset uses host as your failure domain.
But that is beside the point in the near term.
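
If you want to confirm what the rule is actually doing before anything else, something along these lines should show it (the commands are standard; the pool and rule names are placeholders for yours):

    ceph osd pool get <your-pool> crush_rule   # which CRUSH rule the pool uses
    ceph osd crush rule dump <rule-name>       # look for the chooseleaf ... host step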

The problem is that your new OSD host buckets are lopsided in the opposite direction: you now have two tiny host buckets (~22T each) alongside five large ones (~110T each).

So what I would suggest is to set the norebalance, nobackfill, and norecover flags on your cluster.
Then finish adding all of the OSDs to the two new hosts.
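
A minimal sketch of the flag part (standard mon commands, same on any recent release):

    ceph osd set norebalance    # keep PGs from being remapped while you work
    ceph osd set nobackfill     # hold off backfill
    ceph osd set norecover      # hold off recovery

Then bring up the remaining OSDs on osd6/osd7 with whatever tooling you used for the first ones (ceph-volume, cephadm, etc.).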

Then unset the no* flags, and let everything rebalance at that point.
CRUSH is going to try to satisfy the ruleset, which is to place data across host failure domains. Two of those failure domains are ~10% the size of the others, and since it tries to distribute evenly across hosts, your smallest bucket effectively becomes the measuring stick for CephFS fullness.
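
Once all of the new OSDs are up and in, drop the flags again, roughly:

    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset norebalance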

So if you bring everything up all at once (feel free to throttle backfills as needed, see the sketch below), it should increase your usable space as expected.
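
For throttling, something along these lines is a common starting point (the config names are standard; depending on your release you may need ceph tell osd.* injectargs instead of the config database):

    ceph config set osd osd_max_backfills 1          # max backfill ops per OSD
    ceph config set osd osd_recovery_max_active 1    # max concurrent recovery ops per OSD

    ceph -s            # watch recovery/backfill progress
    ceph osd df tree   # watch per-host utilization even out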

Hope that helps,
Reed

> On Aug 11, 2021, at 2:24 AM, David Yang <gmydw1118@xxxxxxxxx> wrote:
> 
> Each OSD node is configured with 12*10T HDD, 1*1.5T NVMe SSD, and 1*150G SSD;
> 
> Now we are ready to expand the cluster by 2 nodes.
> 
> Each node is configured with 12*12T hdd and 2*1.2T nvme ssd.
> 
> At present, I have marked the newly added OSDs out, and the cluster size is
> restored; for example, the normal size is 320T, but when the new OSDs are
> marked in, the storage pool size is only 300T.
> 
> ID   CLASS  WEIGHT     TYPE NAME               STATUS  REWEIGHT  PRI-AFF
> -1         596.79230  root default
> -9         110.60374      host osd1
>  1    hdd    9.09569          osd.1               up   1.00000  1.00000
>  7    hdd    9.09569          osd.7               up   1.00000  1.00000
> 12    hdd    9.09569          osd.12              up   1.00000  1.00000
> 17    hdd    9.09569          osd.17              up   1.00000  1.00000
> 22    hdd    9.09569          osd.22              up   1.00000  1.00000
> 27    hdd    9.09569          osd.27              up   1.00000  1.00000
> 32    hdd    9.09569          osd.32              up   1.00000  1.00000
> 37    hdd    9.09569          osd.37              up   1.00000  1.00000
> 42    hdd    9.09569          osd.42              up   1.00000  1.00000
> 47    hdd    9.09569          osd.47              up   1.00000  1.00000
> 52    hdd    9.09569          osd.52              up   1.00000  1.00000
> 57    hdd    9.09569          osd.57              up   1.00000  1.00000
> 60    ssd    1.45549          osd.60              up   1.00000  1.00000
> -3         110.60374      host osd2
>  0    hdd    9.09569          osd.0               up   1.00000  1.00000
>  5    hdd    9.09569          osd.5               up   1.00000  1.00000
> 10    hdd    9.09569          osd.10              up   1.00000  1.00000
> 15    hdd    9.09569          osd.15              up   1.00000  1.00000
> 20    hdd    9.09569          osd.20              up   1.00000  1.00000
> 25    hdd    9.09569          osd.25              up   1.00000  1.00000
> 30    hdd    9.09569          osd.30              up   1.00000  1.00000
> 35    hdd    9.09569          osd.35              up   1.00000  1.00000
> 40    hdd    9.09569          osd.40              up   1.00000  1.00000
> 45    hdd    9.09569          osd.45              up   1.00000  1.00000
> 50    hdd    9.09569          osd.50              up   1.00000  1.00000
> 55    hdd    9.09569          osd.55              up   1.00000  1.00000
> 61    ssd    1.45549          osd.61              up   1.00000  1.00000
> -5         110.60374      host osd3
>  2    hdd    9.09569          osd.2               up   1.00000  1.00000
>  6    hdd    9.09569          osd.6               up   1.00000  1.00000
> 11    hdd    9.09569          osd.11              up   1.00000  1.00000
> 16    hdd    9.09569          osd.16              up   1.00000  1.00000
> 21    hdd    9.09569          osd.21              up   1.00000  1.00000
> 26    hdd    9.09569          osd.26              up   1.00000  1.00000
> 31    hdd    9.09569          osd.31              up   1.00000  1.00000
> 36    hdd    9.09569          osd.36              up   1.00000  1.00000
> 41    hdd    9.09569          osd.41              up   1.00000  1.00000
> 46    hdd    9.09569          osd.46              up   1.00000  1.00000
> 51    hdd    9.09569          osd.51              up   1.00000  1.00000
> 56    hdd    9.09569          osd.56              up   1.00000  1.00000
> 62    ssd    1.45549          osd.62              up   1.00000  1.00000
> -7         110.60374      host osd4
>  3    hdd    9.09569          osd.3               up   1.00000  1.00000
>  8    hdd    9.09569          osd.8               up   1.00000  1.00000
> 13    hdd    9.09569          osd.13              up   1.00000  1.00000
> 18    hdd    9.09569          osd.18              up   1.00000  1.00000
> 23    hdd    9.09569          osd.23              up   1.00000  1.00000
> 28    hdd    9.09569          osd.28              up   1.00000  1.00000
> 33    hdd    9.09569          osd.33              up   1.00000  1.00000
> 38    hdd    9.09569          osd.38              up   1.00000  1.00000
> 43    hdd    9.09569          osd.43              up   1.00000  1.00000
> 48    hdd    9.09569          osd.48              up   1.00000  1.00000
> 53    hdd    9.09569          osd.53              up   1.00000  1.00000
> 58    hdd    9.09569          osd.58              up   1.00000  1.00000
> 63    ssd    1.45549          osd.63              up   1.00000  1.00000
> -11         110.60374      host osd5
>  4    hdd    9.09569          osd.4               up   1.00000  1.00000
>  9    hdd    9.09569          osd.9               up   1.00000  1.00000
> 14    hdd    9.09569          osd.14              up   1.00000  1.00000
> 19    hdd    9.09569          osd.19              up   1.00000  1.00000
> 24    hdd    9.09569          osd.24              up   1.00000  1.00000
> 29    hdd    9.09569          osd.29              up   1.00000  1.00000
> 34    hdd    9.09569          osd.34              up   1.00000  1.00000
> 39    hdd    9.09569          osd.39              up   1.00000  1.00000
> 44    hdd    9.09569          osd.44              up   1.00000  1.00000
> 49    hdd    9.09569          osd.49              up   1.00000  1.00000
> 54    hdd    9.09569          osd.54              up   1.00000  1.00000
> 59    hdd    9.09569          osd.59              up   1.00000  1.00000
> 64    ssd    1.45549          osd.64              up   1.00000  1.00000
> -19          21.88678      host osd6
> 65    hdd   10.94339          osd.65              up         0  1.00000
> -22          21.88678      host osd7
> 66    hdd   10.94339          osd.66              up         0  1.00000

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


