Re: Adding multiple OSD

On 05 Dec 2017 00:14, Karun Josy wrote:
Thank you for the detailed explanation!

I have another doubt.

This is the total space available in the cluster:

TOTAL : 23490G
Used  : 10170G
Avail : 13320G


But ecpool shows MAX AVAIL as just 3 TB. What am I missing?

==========


$ ceph df
GLOBAL:
     SIZE       AVAIL      RAW USED     %RAW USED
     23490G     13338G       10151G         43.22
POOLS:
     NAME            ID     USED      %USED     MAX AVAIL     OBJECTS
     ostemplates     1       162G      2.79         1134G       42084
     imagepool       34      122G      2.11         1891G       34196
     cvm1            54      8058         0         1891G         950
     ecpool1         55     4246G     42.77         3546G     1232590


$ ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
  0   ssd 1.86469  1.00000  1909G   625G  1284G 32.76 0.76 201
  1   ssd 1.86469  1.00000  1909G   691G  1217G 36.23 0.84 208
  2   ssd 0.87320  1.00000   894G   587G   306G 65.67 1.52 156
11   ssd 0.87320  1.00000   894G   631G   262G 70.68 1.63 186
  3   ssd 0.87320  1.00000   894G   605G   288G 67.73 1.56 165
14   ssd 0.87320  1.00000   894G   635G   258G 71.07 1.64 177
  4   ssd 0.87320  1.00000   894G   419G   474G 46.93 1.08 127
15   ssd 0.87320  1.00000   894G   373G   521G 41.73 0.96 114
16   ssd 0.87320  1.00000   894G   492G   401G 55.10 1.27 149
  5   ssd 0.87320  1.00000   894G   288G   605G 32.25 0.74  87
  6   ssd 0.87320  1.00000   894G   342G   551G 38.28 0.88 102
  7   ssd 0.87320  1.00000   894G   300G   593G 33.61 0.78  93
22   ssd 0.87320  1.00000   894G   343G   550G 38.43 0.89 104
  8   ssd 0.87320  1.00000   894G   267G   626G 29.90 0.69  77
  9   ssd 0.87320  1.00000   894G   376G   518G 42.06 0.97 118
10   ssd 0.87320  1.00000   894G   322G   571G 36.12 0.83 102
19   ssd 0.87320  1.00000   894G   339G   554G 37.95 0.88 109
12   ssd 0.87320  1.00000   894G   360G   534G 40.26 0.93 112
13   ssd 0.87320  1.00000   894G   404G   489G 45.21 1.04 120
20   ssd 0.87320  1.00000   894G   342G   551G 38.29 0.88 103
23   ssd 0.87320  1.00000   894G   148G   745G 16.65 0.38  61
17   ssd 0.87320  1.00000   894G   423G   470G 47.34 1.09 117
18   ssd 0.87320  1.00000   894G   403G   490G 45.18 1.04 120
21   ssd 0.87320  1.00000   894G   444G   450G 49.67 1.15 130
                     TOTAL 23490G 10170G 13320G 43.30



Karun Josy


Without knowing the details of your cluster this is just a guess, but...

perhaps one of your hosts has less free space than the others. A replicated pool can pick 3 hosts that still have plenty of space, but an erasure-coded pool needs chunks on more hosts, so the host with the least free space becomes the limiting factor for its MAX AVAIL.
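As a purely made-up illustration of the shape of that calculation (your real profile and host layout will differ): with a k=6, m=2 profile and exactly 8 hosts, every PG has to place a chunk on every host, so the host with the least free raw space caps the whole pool. If that host only has about 500G left:

    writable raw space  ~= 8 hosts * 500G                = 4000G
    MAX AVAIL (usable)  ~= 4000G * k/(k+m) = 4000G * 6/8 = 3000G

That is how an EC pool can report a MAX AVAIL far below the global AVAIL even though the other hosts still have plenty of free space.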

Check the output of

ceph osd df tree

to see how the free space is distributed across your hosts.
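It can also help to check which erasure-code profile and crush rule the pool uses, since MAX AVAIL is scaled by the usable fraction k/(k+m). Something along these lines should show it (the second command takes whatever profile name the first one prints):

$ ceph osd pool get ecpool1 erasure_code_profile
$ ceph osd erasure-code-profile get <profile name from the previous command>
$ ceph osd pool get ecpool1 crush_rule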


Kind regards
Ronny Aasen

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




