Re: Reduced data availability: 4 pgs inactive, 4 pgs incomplete

Hi Brent,

Brent Kennedy wrote to the mailing list:
Unfortunately, I don't see that setting documented anywhere other than the release notes. It's hard to find guidance for questions in that case, but luckily you noted it in your blog post. I wish I knew what value to set it to. I did use the deprecated one after moving to Hammer a while back due to the miscalculated PGs. I have now set that setting, but used 0 as the value, which cleared the error in the status, but the stuck incomplete PGs persist.

Per your earlier message, you currently have at most 2549 PGs per OSD ("too many PGs per OSD (2549 > max 200)"). Therefore, you might try setting mon_max_pg_per_osd to 2600 (to give some room for minor growth during backfills) and restarting the OSDs.
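
A minimal sketch of what I mean, assuming you maintain ceph.conf by hand and run systemd-managed daemons (adjust to however you deploy config in your environment):

    # ceph.conf, [global] section, pushed to all mon/OSD hosts
    # (assumption: 2600 leaves headroom above the current 2549 PGs per OSD)
    mon_max_pg_per_osd = 2600

    # then restart the daemons so they pick up the new limit
    systemctl restart ceph-mon.target
    systemctl restart ceph-osd.target
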

Of course, reducing the number of PGs per OSD should be on your list at some point, but I do understand that that's not always as easy as it sounds... especially since Ceph still seems to lack a few mechanisms to clean up certain situations (like lossless migration of pool contents to another pool, for RBD or CephFS).

Regards,
Jens

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


