Re: Ceph stuck at: objects misplaced (0.064%)

On 9 July 2020 at 08:32:32 CEST, Eugen Block <eblock@xxxxxx> wrote:
>Do you have pg_autoscaler enabled or the balancer module?
>

AFAIK Luminous does not support the pg_autoscaler; that was introduced in Nautilus.
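
The balancer module does exist in Luminous, though. To rule it out, something like this should show whether it is enabled and running:

ceph mgr module ls
ceph balancer status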

My guess: you don't have enough free space for Ceph to create the third copy of that PG on node02.
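
To check that, you could look up which PG is the remapped one and where CRUSH wants to put it, e.g. (<pgid> below is just a placeholder for whatever the first command prints):

ceph pg dump pgs_brief | grep remapped
ceph pg <pgid> query

If the "up" and "acting" sets differ and the OSD missing from "acting" is on node02, that would fit.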

Or did it resolve itself in the meantime?
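
Btw, the MON_DISK_LOW warning is a separate thing: /dev/sda3 on node02 is 70% full, and the mon warns once less than 30% is free (mon_data_avail_warn). Freeing space on / is the real fix; to just silence the warning you could lower the threshold, e.g.:

ceph tell mon.node02 injectargs '--mon_data_avail_warn=25'

(25 is only an example value.)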

Hth
Mehmet 

>
>Quoting Ml Ml <mliebherr99@xxxxxxxxxxxxxx>:
>
>> Hello,
>>
>> Ceph has been stuck for 4 days at 0.064% objects misplaced, and I don't
>> know why. Can anyone help me get it fixed?
>> I restarted some OSDs and reweighted them to get some data moving,
>> but that did not help.
>>
>> root@node01:~ # ceph -s
>>   cluster:
>>     id:     251c937e-0b55-48c1-8f34-96e84e4023d4
>>     health: HEALTH_WARN
>>             1803/2799972 objects misplaced (0.064%)
>>             mon node02 is low on available space
>>
>>   services:
>>     mon: 3 daemons, quorum node01,node02,node03
>>     mgr: node03(active), standbys: node01, node02
>>     osd: 16 osds: 16 up, 16 in; 1 remapped pgs
>>
>>   data:
>>     pools:   1 pools, 512 pgs
>>     objects: 933.32k objects, 2.68TiB
>>     usage:   9.54TiB used, 5.34TiB / 14.9TiB avail
>>     pgs:     1803/2799972 objects misplaced (0.064%)
>>              511 active+clean
>>              1   active+clean+remapped
>>
>>   io:
>>     client: 131KiB/s rd, 8.57MiB/s wr, 28op/s rd, 847op/s wr
>>
>> root@node01:~ # ceph health detail
>> HEALTH_WARN 1803/2800179 objects misplaced (0.064%); mon node02 is low on available space
>> OBJECT_MISPLACED 1803/2800179 objects misplaced (0.064%)
>> MON_DISK_LOW mon node02 is low on available space
>>     mon.node02 has 28% avail
>> root@node01:~ # ceph versions
>> {
>>     "mon": {
>>         "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 3
>>     },
>>     "mgr": {
>>         "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 3
>>     },
>>     "osd": {
>>         "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 16
>>     },
>>     "mds": {},
>>     "overall": {
>>         "ceph version 12.2.13 (98af9a6b9a46b2d562a0de4b09263d70aeb1c9dd) luminous (stable)": 22
>>     }
>> }
>>
>> root@node02:~ # df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> udev             63G     0   63G   0% /dev
>> tmpfs            13G  1.3G   12G  11% /run
>> /dev/sda3        46G   31G   14G  70% /
>> tmpfs            63G   57M   63G   1% /dev/shm
>> tmpfs           5.0M     0  5.0M   0% /run/lock
>> tmpfs            63G     0   63G   0% /sys/fs/cgroup
>> /dev/sda1       922M  206M  653M  24% /boot
>> /dev/fuse        30M  144K   30M   1% /etc/pve
>> /dev/sde1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-11
>> /dev/sdf1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-14
>> /dev/sdc1       889G  676G  214G  77% /var/lib/ceph/osd/ceph-3
>> /dev/sdb1       889G  667G  222G  76% /var/lib/ceph/osd/ceph-2
>> /dev/sdd1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-7
>> tmpfs            13G     0   13G   0% /run/user/0
>>
>> root@node02:~ # ceph osd tree
>> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
>> -1       14.34781 root default
>> -2        4.25287     host node01
>>  0   hdd  0.85999         osd.0       up  0.80005 1.00000
>>  1   hdd  0.86749         osd.1       up  0.85004 1.00000
>>  6   hdd  0.87270         osd.6       up  0.90002 1.00000
>> 12   hdd  0.78000         osd.12      up  0.95001 1.00000
>> 13   hdd  0.87270         osd.13      up  0.95001 1.00000
>> -3        3.91808     host node02
>>  2   hdd  0.70000         osd.2       up  0.80005 1.00000
>>  3   hdd  0.59999         osd.3       up  0.85004 1.00000
>>  7   hdd  0.87270         osd.7       up  0.85004 1.00000
>> 11   hdd  0.87270         osd.11      up  0.75006 1.00000
>> 14   hdd  0.87270         osd.14      up  0.85004 1.00000
>> -4        6.17686     host node03
>>  4   hdd  0.87000         osd.4       up  1.00000 1.00000
>>  5   hdd  0.87000         osd.5       up  1.00000 1.00000
>>  8   hdd  0.87270         osd.8       up  1.00000 1.00000
>> 10   hdd  0.87270         osd.10      up  1.00000 1.00000
>> 15   hdd  0.87270         osd.15      up  1.00000 1.00000
>> 16   hdd  1.81879         osd.16      up  1.00000 1.00000
>>
>> root@node01:~ # ceph osd df tree
>> ID CLASS WEIGHT   REWEIGHT SIZE    USE     DATA    OMAP    META    AVAIL   %USE  VAR  PGS TYPE NAME
>> -1       14.55780        - 14.9TiB 9.45TiB 7.46TiB 1.47GiB 23.2GiB 5.43TiB 63.52 1.00   - root default
>> -2        4.27286        - 4.35TiB 3.15TiB 2.41TiB  486MiB 7.62GiB 1.21TiB 72.32 1.14   -     host node01
>>  0   hdd  0.85999  0.80005  888GiB  619GiB  269GiB 92.3MiB      0B  269GiB 69.72 1.10  89         osd.0
>>  1   hdd  0.86749  0.85004  888GiB  641GiB  248GiB  109MiB      0B  248GiB 72.12 1.14  92         osd.1
>>  6   hdd  0.87270  0.90002  894GiB  634GiB  632GiB 98.9MiB 2.65GiB  259GiB 70.99 1.12 107         osd.6
>> 12   hdd  0.79999  0.95001  894GiB  664GiB  661GiB 94.4MiB 2.52GiB  230GiB 74.31 1.17 112         osd.12
>> 13   hdd  0.87270  0.95001  894GiB  665GiB  663GiB 91.7MiB 2.46GiB  229GiB 74.43 1.17 112         osd.13
>> -3        4.10808        - 4.35TiB 3.17TiB 2.18TiB  479MiB 6.99GiB 1.18TiB 72.86 1.15   -     host node02
>>  2   hdd  0.78999  0.75006  888GiB  654GiB  235GiB 95.6MiB      0B  235GiB 73.57 1.16  94         osd.2
>>  3   hdd  0.70000  0.80005  888GiB  737GiB  151GiB  114MiB      0B  151GiB 82.98 1.31 105         osd.3
>>  7   hdd  0.87270  0.85004  894GiB  612GiB  610GiB 88.9MiB 2.43GiB  281GiB 68.50 1.08 103         osd.7
>> 11   hdd  0.87270  0.75006  894GiB  576GiB  574GiB 81.8MiB 2.19GiB  317GiB 64.47 1.01  97         osd.11
>> 14   hdd  0.87270  0.85004  894GiB  669GiB  666GiB 98.8MiB 2.37GiB  225GiB 74.85 1.18 112         osd.14
>> -4        6.17686        - 6.17TiB 3.13TiB 2.86TiB  541MiB 8.58GiB 3.04TiB 50.73 0.80   -     host node03
>>  4   hdd  0.87000  1.00000  888GiB  504GiB  384GiB  124MiB      0B  384GiB 56.72 0.89  72         osd.4
>>  5   hdd  0.87000  1.00000  888GiB  520GiB  368GiB 96.2MiB      0B  368GiB 58.57 0.92  75         osd.5
>>  8   hdd  0.87270  1.00000  894GiB  508GiB  505GiB 80.2MiB 2.07GiB  386GiB 56.80 0.89  85         osd.8
>> 10   hdd  0.87270  1.00000  894GiB  374GiB  373GiB 51.9MiB 1.73GiB  519GiB 41.88 0.66  63         osd.10
>> 15   hdd  0.87270  1.00000  894GiB  504GiB  502GiB 60.1MiB 1.99GiB  390GiB 56.37 0.89  84         osd.15
>> 16   hdd  1.81879  1.00000 1.82TiB  797GiB  794GiB  129MiB 2.79GiB 1.04TiB 42.77 0.67 134         osd.16
>>                      TOTAL 14.9TiB 9.45TiB 7.46TiB 1.47GiB 23.2GiB 5.43TiB 63.52
>> MIN/MAX VAR: 0.66/1.31  STDDEV: 11.59
>>
>> root@node02:~ # df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> udev             63G     0   63G   0% /dev
>> tmpfs            13G  1.4G   12G  11% /run
>> /dev/sda3        46G   31G   14G  70% /
>> tmpfs            63G   63M   63G   1% /dev/shm
>> tmpfs           5.0M     0  5.0M   0% /run/lock
>> tmpfs            63G     0   63G   0% /sys/fs/cgroup
>> /dev/sda1       922M  206M  653M  24% /boot
>> /dev/fuse        30M  140K   30M   1% /etc/pve
>> /dev/sde1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-11
>> /dev/sdf1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-14
>> /dev/sdc1       889G  738G  152G  83% /var/lib/ceph/osd/ceph-3
>> /dev/sdb1       889G  654G  235G  74% /var/lib/ceph/osd/ceph-2
>> /dev/sdd1        93M  5.4M   88M   6% /var/lib/ceph/osd/ceph-7
>> tmpfs            13G     0   13G   0% /run/user/0
>>
>>
>>
>> Any idea?
>>
>> Thanks,
>> Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


