Re: Weird issues related to (large/small) weights in mixed nvme/hdd pool

Hi Peter,

Looking at your problem again, you might want to keep track of this tracker issue: http://tracker.ceph.com/issues/22440

Regards,
Tom

On Wed, Jan 31, 2018 at 11:37 AM, Thomas Bennett <thomas@xxxxxxxxx> wrote:
Hi Peter,

From your reply, I see that:
  1. pg 3.12c is part of pool 3.
  2. The OSDs in the 'up' set for pg 3.12c are: 6, 0, 12.
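For reference, the 'up' and 'acting' sets for a pg can be read straight off a live cluster (the osdmap epoch in the output will differ on yours):

```shell
# Query the up/acting sets for pg 3.12c on a live cluster.
# Typical output: osdmap eNNN pg 3.12c (3.12c) -> up [6,0,12] acting [6,0,12]
ceph pg map 3.12c
```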

To check on this 'activating' issue, I suggest the following:
  1. Which rule should pool 3 follow: 'hybrid', 'nvme' or 'hdd'? (Use the ceph osd pool ls detail command and look at pool 3's crush rule.)
  2. Then check whether OSDs 6, 0 and 12 are backed by nvme or hdd. (Use the ceph osd tree | grep nvme command to find your nvme-backed OSDs.)
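As a concrete sketch of step 2, the check can be scripted. Here the 'ceph osd tree' output is simulated with a hypothetical snippet (the OSD ids match, but the classes and weights are made up for illustration); on a live cluster you would capture the real output instead:

```shell
# Hypothetical 'ceph osd tree' snippet; on a live cluster use:
#   tree=$(ceph osd tree)
tree=$(cat <<'EOF'
ID CLASS WEIGHT  TYPE NAME   STATUS
 0 nvme  0.90970 osd.0       up
 6 nvme  0.90970 osd.6       up
12 hdd   7.27739 osd.12      up
EOF
)

# Print the device class backing each OSD in pg 3.12c's 'up' set.
for id in 6 0 12; do
    echo "$tree" | awk -v id="$id" '$1 == id {print "osd." id " is " $2 "-backed"}'
done
```

With this sample data, osd.6 and osd.0 report as nvme-backed while osd.12 reports as hdd-backed, which is exactly the kind of mix you are looking for.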

If your problem is similar to mine, you will find nvme-backed OSDs in a pool that should only be backed by hdds; in my case this was causing a pg to go into the 'activating' state and stay there.
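If that is what the checks show, one possible fix (assuming, as in my setup, the pool is meant to live on hdds and a crush rule named 'hdd' already exists; check with ceph osd crush rule ls) is to point the pool at the correct rule:

```shell
# Assumed rule name 'hdd'; substitute your own pool and rule names.
ceph osd pool set <pool-name> crush_rule hdd
```

After the rule change, CRUSH will remap the affected pgs onto hdd-backed OSDs, so expect some backfill traffic.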

Cheers,
Tom

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
