Re: 0.53.6 -- pgs stuck incomplete

On 02/14/2013 02:43 PM, Gregory Farnum wrote:
> The problem it's complaining about here isn't lost data, but an
> insufficient number of mapped OSDs to the PGs. As it says, look up
> incomplete in the docs. :)

I have; it says "report bug to inktank":
http://ceph.com/docs/master/dev/placement-group/?highlight=incomplete

# ceph -s
   health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean
   monmap e1: 1 mons at {a=144.92.167.231:6789/0}, election epoch 1, quorum 0 a
   osdmap e149: 4 osds: 4 up, 4 in
    pgmap v71913: 576 pgs: 572 active+clean, 4 incomplete; 1405 MB data, 11593 MB used, 3469 GB / 3667 GB avail
   mdsmap e614: 2/2/2 up {0=a=up:active,1=b=up:active}
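
In case it is useful, this is how I have been trying to see what is
actually stuck; the pg ID in the query line is a placeholder, not one
of my real four:

# ceph health detail
# ceph pg dump_stuck inactive
# ceph pg 1.f6 query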

All 4 OSDs are up, "reducing pool metadata min_size from 4" is exactly
what I don't want, and yes -- I deleted the data from CephFS (see my
other e-mail), so at this point whatever's stuck has zero information
in it that is useful to me.
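
Since those PGs no longer hold anything I care about, what I would
really like is to just recreate them empty. If I am reading things
right, something like the following might do it; the pg ID is again a
placeholder for one of the stuck four, and I assume this is
destructive, which is fine here only because the data is already gone:

# ceph pg force_create_pg 1.f6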

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

