Hi Peter,
Looking at your problem again, you might want to keep track of this issue: http://tracker.ceph.com/issues/22440
Regards,
Tom
On Wed, Jan 31, 2018 at 11:37 AM, Thomas Bennett <thomas@xxxxxxxxx> wrote:
Hi Peter,

From your reply, I see that:
- pg 3.12c is part of pool 3.
- The osds in the "up" set for pg 3.12c are: 6, 0, 12.
To check on this 'activating' issue, I suggest the following:
- What crush rule should pool 3 follow: 'hybrid', 'nvme' or 'hdd'? (Use the ceph osd pool ls detail command and look at pool 3's crush_rule field.)
- Then check whether osds 6, 0 and 12 are backed by nvmes or hdds. (Use the ceph osd tree | grep nvme command to find your nvme-backed osds; see the example output after this list.)
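For reference, here is roughly what those checks might look like. The pool name, rule ids, weights and output below are illustrative, and assume a Luminous-style ceph osd tree with a CLASS column:

  # 1. Find pool 3's crush rule id (look for the crush_rule field):
  $ ceph osd pool ls detail
  pool 3 'data' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 512 pgp_num 512 ...

  # 2. Map the rule id back to its name ('hybrid', 'nvme' or 'hdd'):
  $ ceph osd crush rule dump | grep -E 'rule_id|rule_name'

  # 3. List the nvme-backed osds (CLASS is the second column):
  $ ceph osd tree | grep nvme
   6   nvme 0.90970         osd.6      up  1.00000 1.00000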
If your problem is similar to mine, you will have nvme-backed osds in a pool that should only be backed by hdds; in my case that mismatch caused a pg to go into the 'activating' state and stay there.
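As a quick cross-check, something like this (an untested sketch; output illustrative) lists just the osds in pg 3.12c's "up" set together with their device class, so a mismatch is easy to spot:

  # Show osds 0, 6 and 12 with their device class (second column).
  # The regex matches the osd id at the start of each 'ceph osd tree' row.
  $ ceph osd tree | grep -E '^ *(0|6|12) '
    0    hdd 5.45609         osd.0      up  1.00000 1.00000
    6   nvme 0.90970         osd.6      up  1.00000 1.00000
   12    hdd 5.45609         osd.12     up  1.00000 1.00000

  # If any of these show 'nvme' while pool 3's rule is 'hdd',
  # that is the same mismatch I hit.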
Cheers,
Tom