Re: 12.2.7 - Available space decreasing when adding disks


 



Something funny is going on with your new disks:


138   ssd 0.90970  1.00000  931G  820G  111G 88.08 2.71 216 Added
139   ssd 0.90970  1.00000  931G  771G  159G 82.85 2.55 207 Added
140   ssd 0.90970  1.00000  931G  709G  222G 76.12 2.34 197 Added
141   ssd 0.90970  1.00000  931G  664G  267G 71.31 2.19 184 Added

The last 3 columns are %USE, VAR (utilisation relative to the cluster average), and the PG count. These 4 OSDs have a much higher %USE and PG count than the rest, almost double. You probably have these disks in multiple pools and therefore have too many PGs on them.
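
If you want to confirm that, something like this should show it (stock Luminous CLI; osd.138 is just one of the new disks picked as an example):

    ceph osd pool ls detail | grep crush_rule
    ceph pg ls-by-osd 138 | grep -Eo '^[0-9]+\.' | sort | uniq -c

The first command shows which CRUSH rule each pool uses (and therefore which pools can place data on the ssd class); the second counts the PGs on osd.138 per pool id (the number before the dot in a PG id).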


One of them is already at 88% used. The maximum available capacity of a pool is calculated from the most-full OSD in it, which is why your total available capacity drops to 0.6TB.
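
You can see this directly with (stock Luminous commands; %USE is column 8 of `ceph osd df`):

    ceph osd df | sort -nk8 | tail -4      # the four fullest OSDs
    ceph df detail                         # MAX AVAIL per pool

Roughly speaking, MAX AVAIL for a pool scales with the free space left on the fullest OSD the pool can map to, so with osd.138 already at 88% the reported available space for the SSD pool shrinks to ~0.6TB even though the other SSDs still have plenty of room.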


From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Sent: Saturday, 21 July 2018 10:43:16 AM
To: ceph-users
Subject: 12.2.7 - Available space decreasing when adding disks
 

Hello Ceph Users,

 

We added more SSD storage to our Ceph cluster last night: 4 x 1TB drives. The available space went from 1.6TB to 0.6TB (in `ceph df` for the SSD pool).

 

I assume the weights need to be changed, although I didn't think that would be necessary. Should I change them from 0.9 to 0.75, and hopefully it will rebalance correctly?
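
(I assume the command for that would be something along the lines of:

    ceph osd crush reweight osd.138 0.75

for each of the four new OSDs, since the 0.9 figure is the CRUSH weight rather than the 0-1 reweight value - but please correct me if that is the wrong approach.)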

 

# ceph osd tree | grep -v hdd

ID  CLASS WEIGHT    TYPE NAME                     STATUS REWEIGHT PRI-AFF
 -1       534.60309 root default
-19        62.90637     host NAS-AUBUN-RK2-CEPH06
115   ssd   0.43660         osd.115                   up  1.00000 1.00000
116   ssd   0.43660         osd.116                   up  1.00000 1.00000
117   ssd   0.43660         osd.117                   up  1.00000 1.00000
118   ssd   0.43660         osd.118                   up  1.00000 1.00000
-22       105.51169     host NAS-AUBUN-RK2-CEPH07
138   ssd   0.90970         osd.138                   up  1.00000 1.00000 Added
139   ssd   0.90970         osd.139                   up  1.00000 1.00000 Added
-25       105.51169     host NAS-AUBUN-RK2-CEPH08
140   ssd   0.90970         osd.140                   up  1.00000 1.00000 Added
141   ssd   0.90970         osd.141                   up  1.00000 1.00000 Added
 -3        56.32617     host NAS-AUBUN-RK3-CEPH01
 60   ssd   0.43660         osd.60                    up  1.00000 1.00000
 61   ssd   0.43660         osd.61                    up  1.00000 1.00000
 62   ssd   0.43660         osd.62                    up  1.00000 1.00000
 63   ssd   0.43660         osd.63                    up  1.00000 1.00000
 -5        56.32617     host NAS-AUBUN-RK3-CEPH02
 64   ssd   0.43660         osd.64                    up  1.00000 1.00000
 65   ssd   0.43660         osd.65                    up  1.00000 1.00000
 66   ssd   0.43660         osd.66                    up  1.00000 1.00000
 67   ssd   0.43660         osd.67                    up  1.00000 1.00000
 -7        56.32617     host NAS-AUBUN-RK3-CEPH03
 68   ssd   0.43660         osd.68                    up  1.00000 1.00000
 69   ssd   0.43660         osd.69                    up  1.00000 1.00000
 70   ssd   0.43660         osd.70                    up  1.00000 1.00000
 71   ssd   0.43660         osd.71                    up  1.00000 1.00000
-13        45.84741     host NAS-AUBUN-RK3-CEPH04
 72   ssd   0.54579         osd.72                    up  1.00000 1.00000
 73   ssd   0.54579         osd.73                    up  1.00000 1.00000
 76   ssd   0.54579         osd.76                    up  1.00000 1.00000
 77   ssd   0.54579         osd.77                    up  1.00000 1.00000
-16        45.84741     host NAS-AUBUN-RK3-CEPH05
 74   ssd   0.54579         osd.74                    up  1.00000 1.00000
 75   ssd   0.54579         osd.75                    up  1.00000 1.00000
 78   ssd   0.54579         osd.78                    up  1.00000 1.00000
 79   ssd   0.54579         osd.79                    up  1.00000 1.00000

 

# ceph osd df | grep -v hdd

ID  CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
115   ssd 0.43660  1.00000  447G  250G  196G 56.00 1.72 103
116   ssd 0.43660  1.00000  447G  191G  255G 42.89 1.32  84
117   ssd 0.43660  1.00000  447G  213G  233G 47.79 1.47  92
118   ssd 0.43660  1.00000  447G  208G  238G 46.61 1.43  85
138   ssd 0.90970  1.00000  931G  820G  111G 88.08 2.71 216 Added
139   ssd 0.90970  1.00000  931G  771G  159G 82.85 2.55 207 Added
140   ssd 0.90970  1.00000  931G  709G  222G 76.12 2.34 197 Added
141   ssd 0.90970  1.00000  931G  664G  267G 71.31 2.19 184 Added
 60   ssd 0.43660  1.00000  447G  275G  171G 61.62 1.89 100
 61   ssd 0.43660  1.00000  447G  237G  209G 53.04 1.63  90
 62   ssd 0.43660  1.00000  447G  275G  171G 61.58 1.89  95
 63   ssd 0.43660  1.00000  447G  260G  187G 58.15 1.79  97
 64   ssd 0.43660  1.00000  447G  232G  214G 52.08 1.60  83
 65   ssd 0.43660  1.00000  447G  207G  239G 46.36 1.42  75
 66   ssd 0.43660  1.00000  447G  217G  230G 48.54 1.49  84
 67   ssd 0.43660  1.00000  447G  252G  195G 56.36 1.73  92
 68   ssd 0.43660  1.00000  447G  248G  198G 55.56 1.71  94
 69   ssd 0.43660  1.00000  447G  229G  217G 51.25 1.57  84
 70   ssd 0.43660  1.00000  447G  259G  187G 58.01 1.78  87
 71   ssd 0.43660  1.00000  447G  267G  179G 59.83 1.84  97
 72   ssd 0.54579  1.00000  558G  217G  341G 38.96 1.20 100
 73   ssd 0.54579  1.00000  558G  283G  275G 50.75 1.56 121
 76   ssd 0.54579  1.00000  558G  286G  272G 51.33 1.58 129
 77   ssd 0.54579  1.00000  558G  246G  312G 44.07 1.35 104
 74   ssd 0.54579  1.00000  558G  273G  285G 48.91 1.50 122
 75   ssd 0.54579  1.00000  558G  281G  276G 50.45 1.55 114
 78   ssd 0.54579  1.00000  558G  289G  269G 51.80 1.59 133
 79   ssd 0.54579  1.00000  558G  276G  282G 49.39 1.52 119

Kind regards,

Glen Baars

BackOnline Manager

 

This e-mail is intended solely for the benefit of the addressee(s) and any other named recipient. It is confidential and may contain legally privileged or confidential information. If you are not the recipient, any use, distribution, disclosure or copying of this e-mail is prohibited. The confidentiality and legal privilege attached to this communication is not waived or lost by reason of the mistaken transmission or delivery to you. If you have received this e-mail in error, please notify us immediately.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
