Hi,

I experienced exactly the same with 14.04 and the 0.79 release. It was a
fresh clean install with the default crushmap and a ceph-deploy install as
per the quick-start guide. Oddly enough, changing the replica size (incl.
min_size) from 3 to 2 (and 2 to 1) and back again made it work - roughly
the commands involved are sketched below the quoted message. I didn't have
time to look into replicating the issue.

Cheers,
Martin

On Thu, May 8, 2014 at 4:30 PM, Georg Höllrigl <georg.hoellrigl at xidras.com> wrote:

> Hello,
>
> We have a fresh cluster setup - Ubuntu 14.04 and Ceph Firefly. By now
> I've tried this multiple times - but the result stays the same and shows
> lots of trouble (the cluster is empty, no client has accessed it):
>
> #ceph -s
>     cluster b04fc583-9e71-48b7-a741-92f4dff4cfef
>      health HEALTH_WARN 470 pgs stale; 470 pgs stuck stale; 18 pgs stuck
> unclean; 26 requests are blocked > 32 sec
>      monmap e2: 3 mons at {ceph-m-01=10.0.0.100:6789/0,
> ceph-m-02=10.0.1.101:6789/0,ceph-m-03=10.0.1.102:6789/0}, election epoch
> 8, quorum 0,1,2 ceph-m-01,ceph-m-02,ceph-m-03
>      osdmap e409: 9 osds: 9 up, 9 in
>       pgmap v1231: 480 pgs, 9 pools, 822 bytes data, 43 objects
>             9373 MB used, 78317 GB / 78326 GB avail
>                  451 stale+active+clean
>                    1 stale+active+clean+scrubbing
>                   10 active+clean
>                   18 stale+active+remapped
>
> Does anyone have an idea what is happening here? Shouldn't an empty
> cluster show only active+clean pgs?
>
>
> Regards,
> Georg
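
To spell out the toggle I mentioned above - a minimal sketch, to be run
against each affected pool. The pool name "rbd" is only an example (it
exists on a default install); list the pools on your own cluster first:

    #ceph osd lspools
    #ceph osd pool set rbd size 2
    #ceph osd pool set rbd min_size 1

and then back to the defaults:

    #ceph osd pool set rbd size 3
    #ceph osd pool set rbd min_size 2

Afterwards "ceph -s" should show the stale PGs clearing, and
"ceph pg dump_stuck stale" lists any that remain. Keep min_size at or
below size when changing both. No promises this is a real fix rather than
just kicking the PGs into re-peering.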