Hello Ceph experts,

I am using Ceph (version 0.56.6) on SUSE Linux. I created a simple cluster with one monitor server and two OSDs; the conf file is attached.

When I start the cluster and run "ceph -s", I see the following:

$ ceph -s
   health HEALTH_WARN 202 pgs stuck inactive; 202 pgs stuck unclean
   monmap e1: 1 mons at {slesceph1=160.110.73.200:6789/0}, election epoch 1, quorum 0 slesceph1
   osdmap e56: 2 osds: 2 up, 2 in
   pgmap v100: 202 pgs: 202 creating; 0 bytes data, 10305 MB used, 71574 MB / 81880 MB avail
   mdsmap e1: 0/0/1 up

In short, there is some problem with my placement groups: they are stuck forever in the "creating" state, and no OSD is associated with them, despite both OSDs being up and in. When I run "ceph pg stat", I see:

$ ceph pg stat
v100: 202 pgs: 202 creating; 0 bytes data, 10305 MB used, 71574 MB / 81880 MB avail

If I query any individual PG, it is not mapped to any OSD:

$ ceph pg 0.d query
pgid currently maps to no osd

I tried restarting the OSDs and tuning my configuration, to no avail.

Any suggestions?

Yogesh Devi
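
For completeness, this is roughly how one can inspect the CRUSH map in this situation (a sketch based on the standard ceph/crushtool CLI; the /tmp paths are just example locations, and the output shown depends on the live cluster):

```shell
# Check that both OSDs appear in the CRUSH hierarchy with non-zero weight.
# PGs that map to no OSD are commonly caused by OSDs carrying weight 0
# or missing from the CRUSH tree entirely.
ceph osd tree

# Dump the compiled CRUSH map and decompile it to text for inspection
# (/tmp/crushmap and /tmp/crushmap.txt are example paths).
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
cat /tmp/crushmap.txt
```

These commands only read state; they require a reachable monitor, so they are shown here as a transcript rather than something runnable standalone.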