Re: PG damaged "failed_repair"

Hi,

Sorry for the broken formatting. Here are the outputs again.

ceph osd df:

ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA      OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 3    hdd  1.81879         0      0 B      0 B       0 B      0 B      0 B      0 B      0     0    0    down
12    hdd  1.81879   1.00000  1.8 TiB  385 GiB   383 GiB  6.7 MiB  1.4 GiB  1.4 TiB  20.66  1.73   18      up
13    hdd  1.81879   1.00000  1.8 TiB  422 GiB   421 GiB  5.8 MiB  1.3 GiB  1.4 TiB  22.67  1.90   17      up
15    hdd  1.81879   1.00000  1.8 TiB  264 GiB   263 GiB  4.6 MiB  1.1 GiB  1.6 TiB  14.17  1.19   14      up
16    hdd  9.09520   1.00000  9.1 TiB  1.0 TiB  1023 GiB  8.8 MiB  2.6 GiB  8.1 TiB  11.01  0.92   65      up
17    hdd  1.81879   1.00000  1.8 TiB  319 GiB   318 GiB  6.1 MiB  1.0 GiB  1.5 TiB  17.13  1.43   15      up
 1    hdd  5.45749   1.00000  5.5 TiB  546 GiB   544 GiB  7.8 MiB  1.4 GiB  4.9 TiB   9.76  0.82   29      up
 4    hdd  5.45749   1.00000  5.5 TiB  801 GiB   799 GiB  8.3 MiB  2.4 GiB  4.7 TiB  14.34  1.20   44      up
 8    hdd  5.45749   1.00000  5.5 TiB  708 GiB   706 GiB  9.7 MiB  2.1 GiB  4.8 TiB  12.67  1.06   36      up
11    hdd  5.45749         0      0 B      0 B       0 B      0 B      0 B      0 B      0     0    0    down
14    hdd  1.81879   1.00000  1.8 TiB  200 GiB   198 GiB  3.8 MiB  1.3 GiB  1.6 TiB  10.71  0.90   10      up
 0    hdd  9.09520         0      0 B      0 B       0 B      0 B      0 B      0 B      0     0    0    down
 5    hdd  9.09520   1.00000  9.1 TiB  859 GiB   857 GiB   17 MiB  2.1 GiB  8.3 TiB   9.23  0.77   46      up
 9    hdd  9.09520   1.00000  9.1 TiB  924 GiB   922 GiB   11 MiB  2.3 GiB  8.2 TiB   9.92  0.83   55      up
                       TOTAL   53 TiB  6.3 TiB   6.3 TiB   90 MiB   19 GiB   46 TiB  11.95                   
MIN/MAX VAR: 0.77/1.90  STDDEV: 4.74

ceph osd pool ls detail:

pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 32 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 9327 lfor 0/0/104 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 3 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 9018 lfor 0/0/104 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 4 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 9149 lfor 0/0/106 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 5 'polyphoto_backup' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 372 lfor 0/0/362 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm snappy compression_mode aggressive application rbd

The error seems to come from a bug in Ceph itself. I see this assertion failure in the OSD logs: "FAILED ceph_assert(clone_overlap.count(clone))"
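
In case it helps, this is roughly how the surrounding backtrace can be pulled out of the OSD log (a minimal sketch; the OSD id and log path are placeholders for a non-containerized deployment, with a cephadm/containerized setup the OSD logs go to journald instead):

# OSD id (3) and log path are placeholders; adjust for your deployment.
grep -F -B 5 -A 30 'FAILED ceph_assert(clone_overlap.count(clone))' /var/log/ceph/ceph-osd.3.log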

Thanks,
Romain Lebbadi-Breteau
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


