I imagine you aren't actually using the data/metadata pool that these PGs are in, but it's a previously-reported bug we haven't identified: http://tracker.ceph.com/issues/8758

They should go away if you restart the OSDs that host them (or if you just remove those pools), but they won't hurt anything as long as you aren't using them.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Thu, Sep 25, 2014 at 3:37 AM, Pavel V. Kaygorodov <pasha at inasan.ru> wrote:
> Hi!
>
> 16 PGs in our Ceph cluster have been in the active+clean+replay state for more than one day.
> All clients are working fine.
> Is this ok?
>
> root at bastet-mon1:/# ceph -w
>     cluster fffeafa2-a664-48a7-979a-517e3ffa0da1
>      health HEALTH_OK
>      monmap e3: 3 mons at {1=10.92.8.80:6789/0,2=10.92.8.81:6789/0,3=10.92.8.82:6789/0}, election epoch 2570, quorum 0,1,2 1,2,3
>      osdmap e3108: 16 osds: 16 up, 16 in
>       pgmap v1419232: 8704 pgs, 6 pools, 513 GB data, 125 kobjects
>             2066 GB used, 10879 GB / 12945 GB avail
>                 8688 active+clean
>                   16 active+clean+replay
>   client io 3237 kB/s wr, 68 op/s
>
> root at bastet-mon1:/# ceph pg dump | grep replay
> dumped all in format plain
> 0.fd 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:29.902766 0'0 3108:2628 [0,7,14,8] [0,7,14,8] 0 0'0 2014-09-23 02:23:49.463704 0'0 2014-09-23 02:23:49.463704
> 0.e8 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:21.945082 0'0 3108:1823 [2,7,9,10] [2,7,9,10] 2 0'0 2014-09-22 14:37:32.910787 0'0 2014-09-22 14:37:32.910787
> 0.aa 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:29.326607 0'0 3108:2451 [0,7,15,12] [0,7,15,12] 0 0'0 2014-09-23 00:39:10.717363 0'0 2014-09-23 00:39:10.717363
> 0.9c 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:29.325229 0'0 3108:1917 [0,7,9,12] [0,7,9,12] 0 0'0 2014-09-22 14:40:06.694479 0'0 2014-09-22 14:40:06.694479
> 0.9a 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:29.325074 0'0 3108:2486 [0,7,14,11] [0,7,14,11] 0 0'0 2014-09-23 01:14:55.825900 0'0 2014-09-23 01:14:55.825900
> 0.91 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:28.839148 0'0 3108:1962 [0,7,9,10] [0,7,9,10] 0 0'0 2014-09-22 14:37:44.652796 0'0 2014-09-22 14:37:44.652796
> 0.8c 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:28.838683 0'0 3108:2635 [0,2,9,11] [0,2,9,11] 0 0'0 2014-09-23 01:52:52.390529 0'0 2014-09-23 01:52:52.390529
> 0.8b 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:21.215964 0'0 3108:1636 [2,0,8,14] [2,0,8,14] 2 0'0 2014-09-23 01:31:38.134466 0'0 2014-09-23 01:31:38.134466
> 0.50 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:35.869160 0'0 3108:1801 [7,2,15,10] [7,2,15,10] 7 0'0 2014-09-20 08:38:53.963779 0'0 2014-09-13 10:27:26.977929
> 0.44 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:35.871409 0'0 3108:1819 [7,2,15,10] [7,2,15,10] 7 0'0 2014-09-20 11:59:05.208164 0'0 2014-09-20 11:59:05.208164
> 0.39 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:28.653190 0'0 3108:1827 [0,2,9,10] [0,2,9,10] 0 0'0 2014-09-22 14:40:50.697850 0'0 2014-09-22 14:40:50.697850
> 0.32 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:10.970515 0'0 3108:1719 [2,0,14,9] [2,0,14,9] 2 0'0 2014-09-20 12:06:23.716480 0'0 2014-09-20 12:06:23.716480
> 0.2c 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:28.647268 0'0 3108:2540 [0,7,12,8] [0,7,12,8] 0 0'0 2014-09-22 23:44:53.387815 0'0 2014-09-22 23:44:53.387815
> 0.1f 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:28.651059 0'0 3108:2522 [0,2,14,11] [0,2,14,11] 0 0'0 2014-09-22 23:38:16.315755 0'0 2014-09-22 23:38:16.315755
> 0.7 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:35.848797 0'0 3108:1739 [7,0,12,10] [7,0,12,10] 7 0'0 2014-09-22 14:43:38.224718 0'0 2014-09-22 14:43:38.224718
> 0.3 0 0 0 0 0 0 0 active+clean+replay 2014-09-24 02:38:08.885066 0'0 3108:1640 [2,0,11,15] [2,0,11,15] 2 0'0 2014-09-20 06:18:55.987318 0'0 2014-09-20 06:18:55.987318
>
> With best regards,
>   Pavel.
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
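[Editor's note] To follow Greg's suggestion, one needs the set of OSDs that host the replay PGs, which can be read out of the acting sets in the `ceph pg dump` output above. Below is a minimal, hypothetical helper for extracting them; it is not part of Ceph, and the column layout (state in field 9, acting set in field 15) is assumed from the plain-format dump shown in this thread:

```python
def replay_pgs(dump_text):
    """Parse `ceph pg dump` plain output and return {pg_id: acting_osds}
    for PGs whose state includes 'replay'.

    Assumed column layout (taken from the dump in this thread):
    field 0 = pg id, field 8 = state, field 13 = up set, field 14 = acting set.
    """
    result = {}
    for line in dump_text.splitlines():
        fields = line.split()
        if len(fields) < 15 or "replay" not in fields[8]:
            continue
        # Acting set looks like "[0,7,14,8]"; strip brackets and split.
        acting = [int(n) for n in fields[14].strip("[]").split(",")]
        result[fields[0]] = acting
    return result


def osds_to_restart(dump_text):
    """Union of OSD ids appearing in any replay PG's acting set --
    the candidate OSDs to restart, per Greg's suggestion."""
    return sorted({osd for acting in replay_pgs(dump_text).values()
                   for osd in acting})
```

Each returned OSD id would then be restarted on its host; on init-script-based deployments of this era that would be something like `service ceph restart osd.N` (command form assumed, check your distribution's packaging).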