Ceph Not getting into a clean state

Thank you so much! That seems to work immediately.

At the moment I still see 3 PGs in the active+clean+scrubbing state - but 
that will hopefully resolve on its own over time.
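
A minimal way to keep an eye on those last scrubs (assuming a firefly-era 
ceph CLI; the grep pattern is only illustrative):

# ceph -s
# ceph pg dump | grep scrubbing

The second command lists any PGs whose state still contains "scrubbing".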

So the way to go with firefly is to either use at least 3 hosts for 
OSDs, or reduce the number of replicas?
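
For reference, a rough sketch of the per-pool replica change Martin 
describes below ("rbd" is only a placeholder pool name, and the values 
assume the usual firefly defaults of size 3 / min_size 2):

# ceph osd lspools
# ceph osd pool set rbd size 2
# ceph osd pool set rbd min_size 1

...and back to the defaults afterwards:

# ceph osd pool set rbd size 3
# ceph osd pool set rbd min_size 2

Each pool listed by lspools would need the same treatment, since size and 
min_size are per-pool settings.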

Kind Regards,
Georg


On 09.05.2014 10:59, Martin B Nielsen wrote:
> Hi,
>
> I experienced exactly the same with 14.04 and the 0.79 release.
>
> It was a fresh, clean install with the default crushmap and ceph-deploy
> install as per the quick-start guide.
>
> Oddly enough, changing the replica size (incl. min_size) from 3 to 2 (and
> 2 to 1) and back again made it work.
>
> I didn't have time to look into replicating the issue.
>
> Cheers,
> Martin
>
>
> On Thu, May 8, 2014 at 4:30 PM, Georg Höllrigl
> <georg.hoellrigl at xidras.com> wrote:
>
>     Hello,
>
>     We have a fresh cluster setup with Ubuntu 14.04 and Ceph firefly. By
>     now I've tried this multiple times, but the result stays the same and
>     shows a lot of trouble (the cluster is empty, no client has accessed
>     it):
>
>     #ceph -s
>          cluster b04fc583-9e71-48b7-a741-92f4dff4cfef
>           health HEALTH_WARN 470 pgs stale; 470 pgs stuck stale; 18 pgs
>     stuck unclean; 26 requests are blocked > 32 sec
>           monmap e2: 3 mons at
>     {ceph-m-01=10.0.0.100:6789/0,ceph-m-02=10.0.1.101:6789/0,ceph-m-03=10.0.1.102:6789/0},
>     election epoch 8, quorum 0,1,2 ceph-m-01,ceph-m-02,ceph-m-03
>           osdmap e409: 9 osds: 9 up, 9 in
>            pgmap v1231: 480 pgs, 9 pools, 822 bytes data, 43 objects
>                  9373 MB used, 78317 GB / 78326 GB avail
>                       451 stale+active+clean
>                         1 stale+active+clean+scrubbing
>                        10 active+clean
>                        18 stale+active+remapped
>
>     Does anyone have an idea what is happening here? Shouldn't an empty
>     cluster show only active+clean PGs?
>
>
>     Regards,
>     Georg
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users at lists.ceph.com
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
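
For anyone who finds this thread with the same symptoms, a few commands that 
can help narrow down stale/stuck PGs (assuming a firefly-era ceph CLI; the 
PG id below is only a placeholder - take a real one from the dump output):

# ceph health detail
# ceph pg dump_stuck stale
# ceph pg dump_stuck unclean
# ceph osd tree
# ceph pg 0.1 query

"ceph osd tree" shows how the OSDs map onto hosts, which matters here 
because the default firefly CRUSH rule wants to place each replica on a 
different host.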

