Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)


 



So the next step is to place the pools on the right rule:

ceph osd pool set db-pool crush_rule fc-r02-ssd
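
If the class-aware rule does not exist yet, it can be created first, and the assignment verified afterwards. A sketch using the rule and pool names from this thread (the create-replicated step is only needed if fc-r02-ssd is missing; `default`/`host` are assumptions about the intended root and failure domain):

=====
# create a replicated rule restricted to the "ssd" device class
# under the default root, failure domain host
ceph osd crush rule create-replicated fc-r02-ssd default host ssd

# confirm which rule the pool now uses
ceph osd pool get db-pool crush_rule

# inspect the rule's steps
ceph osd crush rule dump fc-r02-ssd
=====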


On Wed, Nov 8, 2023 at 12:04, Denny Fuchs <linuxmail@xxxxxxxx> wrote:

> hi,
>
> I forgot to include the commands I used:
>
> =====
> ceph osd crush move fc-r02-ceph-osd-01 root=default
> ceph osd crush move fc-r02-ceph-osd-01 root=default
> ...
> =====
>
> I've also found this option:
>
> ===========
> root@fc-r02-ceph-osd-01:[~]: ceph osd crush tree --show-shadow
> ID   CLASS  WEIGHT    TYPE NAME
> -39   nvme   1.81938  root default~nvme
> -30   nvme         0      host fc-r02-ceph-osd-01~nvme
> -31   nvme   0.36388      host fc-r02-ceph-osd-02~nvme
>   36   nvme   0.36388          osd.36
> -32   nvme   0.36388      host fc-r02-ceph-osd-03~nvme
>   40   nvme   0.36388          osd.40
> -33   nvme   0.36388      host fc-r02-ceph-osd-04~nvme
>   37   nvme   0.36388          osd.37
> -34   nvme   0.36388      host fc-r02-ceph-osd-05~nvme
>   38   nvme   0.36388          osd.38
> -35   nvme   0.36388      host fc-r02-ceph-osd-06~nvme
>   39   nvme   0.36388          osd.39
> -38   nvme         0  root ssds~nvme
> -37   nvme         0      datacenter fc-ssds~nvme
> -36   nvme         0          rack r02-ssds~nvme
> -29   nvme         0  root sata~nvme
> -28   nvme         0      datacenter fc-sata~nvme
> -27   nvme         0          rack r02-sata~nvme
> -24    ssd         0  root ssds~ssd
> -23    ssd         0      datacenter fc-ssds~ssd
> -21    ssd         0          rack r02-ssds~ssd
> -22    ssd         0  root sata~ssd
> -19    ssd         0      datacenter fc-sata~ssd
> -20    ssd         0          rack r02-sata~ssd
> -14                0  root sata
> -18                0      datacenter fc-sata
> -16                0          rack r02-sata
> -13                0  root ssds
> -17                0      datacenter fc-ssds
> -15                0          rack r02-ssds
>   -4    ssd  22.17122  root default~ssd
>   -7    ssd   4.00145      host fc-r02-ceph-osd-01~ssd
>    0    ssd   0.45470          osd.0
>    1    ssd   0.45470          osd.1
>    2    ssd   0.45470          osd.2
>    3    ssd   0.45470          osd.3
>    4    ssd   0.45470          osd.4
>    5    ssd   0.45470          osd.5
>   41    ssd   0.36388          osd.41
>   42    ssd   0.45470          osd.42
>   48    ssd   0.45470          osd.48
>   -3    ssd   3.61948      host fc-r02-ceph-osd-02~ssd
>    6    ssd   0.45470          osd.6
>    7    ssd   0.45470          osd.7
>    8    ssd   0.45470          osd.8
>    9    ssd   0.45470          osd.9
>   10    ssd   0.43660          osd.10
>   29    ssd   0.45470          osd.29
>   43    ssd   0.45470          osd.43
>   49    ssd   0.45470          osd.49
>   -8    ssd   3.63757      host fc-r02-ceph-osd-03~ssd
>   11    ssd   0.45470          osd.11
>   12    ssd   0.45470          osd.12
>   13    ssd   0.45470          osd.13
>   14    ssd   0.45470          osd.14
>   15    ssd   0.45470          osd.15
>   16    ssd   0.45470          osd.16
>   44    ssd   0.45470          osd.44
>   50    ssd   0.45470          osd.50
> -10    ssd   3.63757      host fc-r02-ceph-osd-04~ssd
>   30    ssd   0.45470          osd.30
>   31    ssd   0.45470          osd.31
>   32    ssd   0.45470          osd.32
>   33    ssd   0.45470          osd.33
>   34    ssd   0.45470          osd.34
>   35    ssd   0.45470          osd.35
>   45    ssd   0.45470          osd.45
>   51    ssd   0.45470          osd.51
> -12    ssd   3.63757      host fc-r02-ceph-osd-05~ssd
>   17    ssd   0.45470          osd.17
>   18    ssd   0.45470          osd.18
>   19    ssd   0.45470          osd.19
>   20    ssd   0.45470          osd.20
>   21    ssd   0.45470          osd.21
>   22    ssd   0.45470          osd.22
>   46    ssd   0.45470          osd.46
>   52    ssd   0.45470          osd.52
> -26    ssd   3.63757      host fc-r02-ceph-osd-06~ssd
>   23    ssd   0.45470          osd.23
>   24    ssd   0.45470          osd.24
>   25    ssd   0.45470          osd.25
>   26    ssd   0.45470          osd.26
>   27    ssd   0.45470          osd.27
>   28    ssd   0.45470          osd.28
>   47    ssd   0.45470          osd.47
>   53    ssd   0.45470          osd.53
>   -1         23.99060  root default
>   -6          4.00145      host fc-r02-ceph-osd-01
>    0    ssd   0.45470          osd.0
>    1    ssd   0.45470          osd.1
>    2    ssd   0.45470          osd.2
>    3    ssd   0.45470          osd.3
>    4    ssd   0.45470          osd.4
>    5    ssd   0.45470          osd.5
>   41    ssd   0.36388          osd.41
>   42    ssd   0.45470          osd.42
>   48    ssd   0.45470          osd.48
>   -2          3.98335      host fc-r02-ceph-osd-02
>   36   nvme   0.36388          osd.36
>    6    ssd   0.45470          osd.6
>    7    ssd   0.45470          osd.7
>    8    ssd   0.45470          osd.8
>    9    ssd   0.45470          osd.9
>   10    ssd   0.43660          osd.10
>   29    ssd   0.45470          osd.29
>   43    ssd   0.45470          osd.43
>   49    ssd   0.45470          osd.49
>   -5          4.00145      host fc-r02-ceph-osd-03
>   40   nvme   0.36388          osd.40
>   11    ssd   0.45470          osd.11
>   12    ssd   0.45470          osd.12
>   13    ssd   0.45470          osd.13
>   14    ssd   0.45470          osd.14
>   15    ssd   0.45470          osd.15
>   16    ssd   0.45470          osd.16
>   44    ssd   0.45470          osd.44
>   50    ssd   0.45470          osd.50
>   -9          4.00145      host fc-r02-ceph-osd-04
>   37   nvme   0.36388          osd.37
>   30    ssd   0.45470          osd.30
>   31    ssd   0.45470          osd.31
>   32    ssd   0.45470          osd.32
>   33    ssd   0.45470          osd.33
>   34    ssd   0.45470          osd.34
>   35    ssd   0.45470          osd.35
>   45    ssd   0.45470          osd.45
>   51    ssd   0.45470          osd.51
> -11          4.00145      host fc-r02-ceph-osd-05
>   38   nvme   0.36388          osd.38
>   17    ssd   0.45470          osd.17
>   18    ssd   0.45470          osd.18
>   19    ssd   0.45470          osd.19
>   20    ssd   0.45470          osd.20
>   21    ssd   0.45470          osd.21
>   22    ssd   0.45470          osd.22
>   46    ssd   0.45470          osd.46
>   52    ssd   0.45470          osd.52
> -25          4.00145      host fc-r02-ceph-osd-06
>   39   nvme   0.36388          osd.39
>   23    ssd   0.45470          osd.23
>   24    ssd   0.45470          osd.24
>   25    ssd   0.45470          osd.25
>   26    ssd   0.45470          osd.26
>   27    ssd   0.45470          osd.27
>   28    ssd   0.45470          osd.28
>   47    ssd   0.45470          osd.47
>   53    ssd   0.45470          osd.53
> =====================
>
> cu denny
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
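
The ~ssd and ~nvme entries in the tree above are the CRUSH "shadow" hierarchies that Ceph maintains per device class; a class-aware rule selects from them via `step take <root> class <class>`. As an illustration (the rule id is hypothetical), a decompiled rule restricted to SSDs under the default root would look like:

=====
rule fc-r02-ssd {
        id 1
        type replicated
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
}
=====

A rule whose `step take` points at an empty root (such as the zero-weight ssds/sata roots above, after the hosts were moved to root=default) cannot place any data, which matches the 100.00% usage symptom in the subject.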





