Uneven Node utilization

Hello, Cephers,

I have a small 6-node cluster with 36 OSDs. When running benchmark/torture tests I noticed that some nodes, usually storage2n6-la but sometimes others, are utilized much more than the rest: some OSDs sit at 100% utilization and the load average climbs to 21, while on the other nodes the load average stays around 5-6 and the OSDs are at 40-60% utilization.

I cannot use the upmap balancer mode because I still have some client machines running hammer, so I wonder whether my issue is caused by the crush-compat balancing mode: hosts with the same number and size of disks show different compat weights. If so, what can I do to improve the load/disk-usage distribution in the cluster?

Also, my legacy client machines only need to access CephFS on the new cluster, so I wonder whether keeping hammer as the oldest supported client version still makes sense, or whether I should raise it to jewel and set the CRUSH tunables to optimal.
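In case it clarifies what I have in mind, the steps I think are involved look roughly like this (this is just my reading of the docs, so please correct me if the commands or the order are wrong; as far as I understand, upmap mode itself would additionally need luminous-or-newer clients, so this sketch only covers the jewel/tunables question):

# check which feature releases the currently connected clients report
ceph features

# if no pre-jewel clients remain, raise the minimum client version...
ceph osd set-require-min-compat-client jewel

# ...and move the CRUSH tunables to the optimal (jewel) profile;
# I expect this to trigger some data movement
ceph osd crush tunables optimal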

Help is greatly appreciated,


ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    ssd       94 TiB     88 TiB     5.9 TiB      5.9 TiB          6.29
    TOTAL     94 TiB     88 TiB     5.9 TiB      5.9 TiB          6.29
 
POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data          1     1.6 TiB       3.77M     4.9 TiB      5.57        28 TiB
    cephfs_metadata      2     3.9 GiB     367.34k     4.3 GiB         0        28 TiB
    one                  5     344 GiB      90.94k     1.0 TiB      1.20        28 TiB 

ceph -s
  cluster:
    id:     9b4468b7-5bf2-4964-8aec-4b2f4bee87ad
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum storage2n1-la,storage2n2-la,storage2n3-la (age 39h)
    mgr: storage2n1-la(active, since 39h), standbys: storage2n2-la, storage2n3-la
    mds: cephfs:1 {0=storage2n4-la=up:active} 1 up:standby-replay 1 up:standby
    osd: 36 osds: 36 up (since 37h), 36 in (since 10w)
 
  data:
    pools:   3 pools, 1664 pgs
    objects: 4.23M objects, 1.9 TiB
    usage:   5.9 TiB used, 88 TiB / 94 TiB avail
    pgs:     1664 active+clean
 
  io:
    client:   1.2 KiB/s rd, 46 KiB/s wr, 5 op/s rd, 2 op/s wr

ceph osd df looks like this:
 
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL   %USE VAR  PGS STATUS
 6   ssd 1.74609  1.00000 1.7 TiB 115 GiB 114 GiB 186 MiB  838 MiB 1.6 TiB 6.45 1.02  92     up
12   ssd 1.74609  1.00000 1.7 TiB 122 GiB 121 GiB  90 MiB  934 MiB 1.6 TiB 6.81 1.08  92     up
18   ssd 1.74609  1.00000 1.7 TiB 112 GiB 111 GiB 107 MiB  917 MiB 1.6 TiB 6.24 0.99  91     up
24   ssd 3.49219  1.00000 3.5 TiB 233 GiB 232 GiB 206 MiB  818 MiB 3.3 TiB 6.53 1.04 185     up
30   ssd 3.49219  1.00000 3.5 TiB 224 GiB 223 GiB 246 MiB  778 MiB 3.3 TiB 6.25 0.99 187     up
35   ssd 3.49219  1.00000 3.5 TiB 216 GiB 215 GiB 252 MiB  772 MiB 3.3 TiB 6.04 0.96 184     up
 5   ssd 1.74609  1.00000 1.7 TiB 112 GiB 111 GiB  88 MiB  936 MiB 1.6 TiB 6.28 1.00  92     up
11   ssd 1.74609  1.00000 1.7 TiB 112 GiB 111 GiB 112 MiB  912 MiB 1.6 TiB 6.26 0.99  92     up
17   ssd 1.74609  1.00000 1.7 TiB 112 GiB 111 GiB 274 MiB  750 MiB 1.6 TiB 6.25 0.99  94     up
23   ssd 3.49219  1.00000 3.5 TiB 234 GiB 233 GiB 192 MiB  832 MiB 3.3 TiB 6.54 1.04 183     up
29   ssd 3.49219  1.00000 3.5 TiB 216 GiB 215 GiB 356 MiB  668 MiB 3.3 TiB 6.03 0.96 184     up
34   ssd 3.49219  1.00000 3.5 TiB 227 GiB 226 GiB 267 MiB  757 MiB 3.3 TiB 6.34 1.01 184     up
 4   ssd 1.74609  1.00000 1.7 TiB 125 GiB 124 GiB  16 MiB 1008 MiB 1.6 TiB 7.00 1.11  94     up
10   ssd 1.74609  1.00000 1.7 TiB 108 GiB 107 GiB 163 MiB  861 MiB 1.6 TiB 6.01 0.96  93     up
16   ssd 1.74609  1.00000 1.7 TiB 107 GiB 106 GiB 163 MiB  861 MiB 1.6 TiB 6.00 0.95  94     up
22   ssd 3.49219  1.00000 3.5 TiB 221 GiB 220 GiB 385 MiB  700 MiB 3.3 TiB 6.18 0.98 187     up
28   ssd 3.49219  1.00000 3.5 TiB 223 GiB 222 GiB 257 MiB  767 MiB 3.3 TiB 6.23 0.99 186     up
33   ssd 3.49219  1.00000 3.5 TiB 241 GiB 240 GiB 233 MiB  791 MiB 3.3 TiB 6.74 1.07 185     up
 1   ssd 1.74609  1.00000 1.7 TiB 103 GiB 102 GiB 240 MiB  784 MiB 1.6 TiB 5.76 0.92  93     up
 7   ssd 1.74609  1.00000 1.7 TiB 117 GiB 116 GiB  70 MiB  954 MiB 1.6 TiB 6.56 1.04  91     up
13   ssd 1.74609  1.00000 1.7 TiB 126 GiB 125 GiB  76 MiB  948 MiB 1.6 TiB 7.03 1.12  95     up
19   ssd 3.49219  1.00000 3.5 TiB 230 GiB 229 GiB 307 MiB  717 MiB 3.3 TiB 6.44 1.02 186     up
25   ssd 3.49219  1.00000 3.5 TiB 220 GiB 219 GiB 309 MiB  715 MiB 3.3 TiB 6.15 0.98 185     up
31   ssd 3.49219  1.00000 3.5 TiB 223 GiB 222 GiB 205 MiB  819 MiB 3.3 TiB 6.23 0.99 186     up
 0   ssd 1.74609  1.00000 1.7 TiB 116 GiB 115 GiB 151 MiB  873 MiB 1.6 TiB 6.49 1.03  93     up
 3   ssd 1.74609  1.00000 1.7 TiB 121 GiB 120 GiB  89 MiB  935 MiB 1.6 TiB 6.77 1.08  91     up
 9   ssd 1.74609  1.00000 1.7 TiB 104 GiB 103 GiB 183 MiB  841 MiB 1.6 TiB 5.81 0.92  93     up
15   ssd 3.49219  1.00000 3.5 TiB 222 GiB 221 GiB 205 MiB  819 MiB 3.3 TiB 6.20 0.98 185     up
21   ssd 3.49219  1.00000 3.5 TiB 213 GiB 212 GiB 312 MiB  712 MiB 3.3 TiB 5.95 0.95 182     up
27   ssd 3.49219  1.00000 3.5 TiB 221 GiB 220 GiB 219 MiB  805 MiB 3.3 TiB 6.17 0.98 185     up
 2   ssd 1.74609  1.00000 1.7 TiB 104 GiB 103 GiB 116 MiB  908 MiB 1.6 TiB 5.80 0.92  92     up
 8   ssd 1.74609  1.00000 1.7 TiB 111 GiB 110 GiB 118 MiB  906 MiB 1.6 TiB 6.21 0.99  91     up
14   ssd 1.74609  1.00000 1.7 TiB 106 GiB 105 GiB 192 MiB  832 MiB 1.6 TiB 5.95 0.94  92     up
20   ssd 3.49219  1.00000 3.5 TiB 226 GiB 225 GiB 196 MiB  828 MiB 3.3 TiB 6.31 1.00 185     up
26   ssd 3.49219  1.00000 3.5 TiB 232 GiB 231 GiB 231 MiB  793 MiB 3.3 TiB 6.47 1.03 184     up
32   ssd 3.49219  1.00000 3.5 TiB 226 GiB 225 GiB 229 MiB  795 MiB 3.3 TiB 6.31 1.00 184     up
                    TOTAL  94 TiB 5.9 TiB 5.9 TiB 6.9 GiB   29 GiB  88 TiB 6.29                
MIN/MAX VAR: 0.92/1.12  STDDEV: 0.31

ceph osd crush tree looks like this:

ID  CLASS WEIGHT   (compat) TYPE NAME              
 -1       94.28906          root default          
 -9       15.71484 16.19038     host storage2n1-la
  6   ssd  1.74609  1.77313         osd.6          
 12   ssd  1.74609  1.82532         osd.12        
 18   ssd  1.74609  2.10315         osd.18        
 24   ssd  3.49219  3.50087         osd.24        
 30   ssd  3.49219  3.01933         osd.30        
 35   ssd  3.49219  3.96858         osd.35        
-11       15.71484 15.75711     host storage2n2-la
  5   ssd  1.74609  1.84412         osd.5          
 11   ssd  1.74609  1.71651         osd.11        
 17   ssd  1.74609  1.76128         osd.17        
 23   ssd  3.49219  3.73497         osd.23        
 29   ssd  3.49219  3.27397         osd.29        
 34   ssd  3.49219  3.42627         osd.34        
 -7       15.71484 14.19093     host storage2n3-la
  4   ssd  1.74609  1.66724         osd.4          
 10   ssd  1.74609  1.60271         osd.10        
 16   ssd  1.74609  1.39088         osd.16        
 22   ssd  3.49219  3.11852         osd.22        
 28   ssd  3.49219  3.04280         osd.28        
 33   ssd  3.49219  3.36879         osd.33        
 -5       15.71484 15.87343     host storage2n4-la
  1   ssd  1.74609  1.92644         osd.1          
  7   ssd  1.74609  2.12386         osd.7          
 13   ssd  1.74609  1.42424         osd.13        
 19   ssd  3.49219  3.52307         osd.19        
 25   ssd  3.49219  3.55241         osd.25        
 31   ssd  3.49219  3.32341         osd.31        
 -3       15.71484 16.08948     host storage2n5-la
  0   ssd  1.74609  1.97093         osd.0          
  3   ssd  1.74609  1.87062         osd.3          
  9   ssd  1.74609  1.57335         osd.9          
 15   ssd  3.49219  3.82397         osd.15        
 21   ssd  3.49219  3.59575         osd.21        
 27   ssd  3.49219  3.25485         osd.27        
-13       15.71484 16.18745     host storage2n6-la
  2   ssd  1.74609  2.18393         osd.2          
  8   ssd  1.74609  1.69547         osd.8          
 14   ssd  1.74609  1.95445         osd.14        
 20   ssd  3.49219  3.49811         osd.20        
 26   ssd  3.49219  3.42702         osd.26        
 32   ssd  3.49219  3.42848         osd.32

ceph balancer status
{
    "last_optimize_duration": "0:00:00.903346",
    "plans": [],
    "mode": "crush-compat",
    "active": true,
    "optimize_result": "Unable to find further optimization, change balancer mode and retry might help",
    "last_optimize_started": "Thu Jan 16 05:52:53 2020"
}
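
If it would help with diagnosis, I can also post the balancer's own scoring of the current distribution; as far as I understand the docs, these commands show it (please correct me if I'm misreading them):

# overall score of the current PG/utilization distribution (lower is better)
ceph balancer eval

# per-pool breakdown behind that score
ceph balancer eval-verbose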
