Re: Infinite degraded objects

From which version of Ceph to which version did you upgrade? Can you
provide logs from the crashing OSDs? A degraded object percentage
larger than 100% has been reported before
(https://www.spinics.net/lists/ceph-users/msg39519.html) and it looks
like it was fixed a week or so ago:
http://tracker.ceph.com/issues/21803
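
As a sanity check, the absurd percentage is just the bogus counter
carried through the arithmetic: 20266198323167232 / 287940 * 100 is
roughly 7038340738753.6%, which matches the reported value. So the
degraded-object count itself is wrong (the kind of stat underflow
that tracker issue describes), not the percentage calculation.

To gather the version and log information, something along these
lines should do it (a rough sketch assuming a default package install
with systemd; substitute the id of an OSD that actually crashes for
<id>):

    # Ceph version of the installed binaries on this node
    ceph --version
    # versions reported by the running OSD daemons
    ceph tell osd.* version
    # recent log output from a crashing OSD on a systemd host
    journalctl -u ceph-osd@<id> --since "1 hour ago"
    # or tail the traditional log file
    tail -n 500 /var/log/ceph/ceph-osd.<id>.log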

On Mon, Oct 23, 2017 at 5:10 AM, Gonzalo Aguilar Delgado
<gaguilar@xxxxxxxxxxxxxxxxxx> wrote:
> Hello,
>
> Since we upgraded the Ceph cluster we have been facing a lot of problems,
> most of them due to OSDs crashing. What can cause this?
>
>
> This morning I woke up to this message:
>
>
> root@red-compute:~# ceph -w
>     cluster 9028f4da-0d77-462b-be9b-dbdf7fa57771
>      health HEALTH_ERR
>             1 pgs are stuck inactive for more than 300 seconds
>             7 pgs inconsistent
>             1 pgs stale
>             1 pgs stuck stale
>             recovery 20266198323167232/287940 objects degraded
> (7038340738753.641%)
>             37154696925806626 scrub errors
>             too many PGs per OSD (305 > max 300)
>      monmap e12: 2 mons at
> {blue-compute=172.16.0.119:6789/0,red-compute=172.16.0.100:6789/0}
>             election epoch 4986, quorum 0,1 red-compute,blue-compute
>       fsmap e913: 1/1/1 up {0=blue-compute=up:active}
>      osdmap e8096: 5 osds: 5 up, 5 in
>             flags require_jewel_osds
>       pgmap v68755349: 764 pgs, 6 pools, 558 GB data, 140 kobjects
>             1119 GB used, 3060 GB / 4179 GB avail
>             20266198323167232/287940 objects degraded (7038340738753.641%)
>                  756 active+clean
>                    7 active+clean+inconsistent
>                    1 stale+active+clean
>   client io 1630 B/s rd, 552 kB/s wr, 0 op/s rd, 64 op/s wr
>
> 2017-10-22 18:10:13.000812 mon.0 [INF] pgmap v68755348: 764 pgs: 7
> active+clean+inconsistent, 756 active+clean, 1 stale+active+clean; 558 GB
> data, 1119 GB used, 3060 GB / 4179 GB avail; 1641 B/s rd, 229 kB/s wr, 39
> op/s; 20266198323167232/287940 objects degraded (7038340738753.641%)
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


