Degraded objects on brand new file system?

Hi,

I've just created a brand-new filesystem using current unstable
branch.

ceph -w shows me this after I start it up and it settles down:

2010-11-19 13:07:39.279045    pg v247: 3432 pgs: 3432 active; 54 KB data, 98200 KB used, 3032 GB / 3032 GB avail; 95/108 degraded (87.963%)
2010-11-19 13:07:39.532174    pg v248: 3432 pgs: 3432 active; 54 KB data, 98232 KB used, 3032 GB / 3032 GB avail; 95/108 degraded (87.963%)
2010-11-19 13:07:41.123789    pg v249: 3432 pgs: 3432 active; 54 KB data, 98416 KB used, 3032 GB / 3032 GB avail; 95/108 degraded (87.963%)

That output seems to come from PGMap::print_summary().
If so, it seems to be telling me I have 108 objects, 
of which 95 are degraded.
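
As a sanity check on my reading of the summary line, the percentage
does match 95 degraded out of 108 objects (just my own arithmetic,
not pulled from the code):

    # quick check: 95 degraded of 108 total objects
    degraded, total = 95, 108
    print("%d/%d degraded (%.3f%%)" % (degraded, total, 100.0 * degraded / total))
    # prints: 95/108 degraded (87.963%)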

If so, why would I have any degraded objects on
a brand-new file system?  All my osds are up/in;
shouldn't any degraded objects have been recovered?

Note that I haven't even mounted it anywhere yet.

Also, the above result is from starting each of my 13 osds
one at a time, waiting for each osd's PGs to go active before
starting the next one.

If I start up all the cosds on a newly created file system
roughly simultaneously, using pdsh, I get 7/108 objects
degraded.

What am I missing?

How can I learn what objects are degraded?
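
(In case it helps, the rough approach I had in mind is below.  It
assumes `ceph pg dump` prints one line per PG with a state string that
mentions "degraded" for the affected PGs, which I haven't verified,
and it would only get me to the PG level, not to individual objects.

    #!/usr/bin/env python
    # rough sketch: list PGs whose reported state mentions "degraded"
    import subprocess

    out = subprocess.check_output(["ceph", "pg", "dump"]).decode()
    for line in out.splitlines():
        if "degraded" in line:
            print(line)

If there's a more direct way to see this, I'd appreciate a pointer.)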

Thanks -- Jim




