What is your pool size? 304 PGs sounds awfully small for 20 OSDs. More PGs
will help distribute the data across the OSDs more evenly. But with a full or
near-full OSD on your hands, increasing the PG count is a no-no. If you
search the list archive, I believe there was a thread a month or so ago that
provided a sort of walkthrough for dealing with uneven distribution and a
full OSD (a rough sketch of the relevant commands is at the bottom of this
mail).

-K.

On 03/24/2016 01:54 PM, Jacek Jarosiewicz wrote:
> disk usage on the full osd is as below. What are the *_TEMP directories
> for? How can I make sure which pg directories are safe to remove?
>
> [root@cf04 current]# du -hs *
> 156G    0.14_head
> 156G    0.21_head
> 155G    0.32_head
> 157G    0.3a_head
> 155G    0.e_head
> 156G    0.f_head
> 40K     10.2_head
> 4.0K    11.3_head
> 218G    14.13_head
> 218G    14.15_head
> 218G    14.1b_head
> 219G    14.1f_head
> 9.1G    14.29_head
> 219G    14.2a_head
> 75G     14.2d_head
> 125G    14.2e_head
> 113G    14.32_head
> 163G    14.33_head
> 218G    14.35_head
> 151G    14.39_head
> 218G    14.3b_head
> 103G    14.3d_head
> 217G    14.3f_head
> 219G    14.a_head
> 773M    17.0_head
> 814M    17.10_head
> 4.0K    17.10_TEMP
> 747M    17.19_head
> 4.0K    17.19_TEMP
> 669M    17.1b_head
> 659M    17.1c_head
> 638M    17.1f_head
> 681M    17.30_head
> 4.0K    17.30_TEMP
> 721M    17.34_head
> 695M    17.3d_head
> 726M    17.3e_head
> 734M    17.3f_head
> 4.0K    17.3f_TEMP
> 670M    17.d_head
> 597M    17.e_head
> 4.0K    17.e_TEMP
> 4.0K    1.7_head
> 34M     5.1_head
> 34M     5.6_head
> 4.0K    9.6_head
> 4.0K    commit_op_seq
> 30M     meta
> 0       nosnap
> 614M    omap
>
> On 03/24/2016 10:11 AM, Jacek Jarosiewicz wrote:
>> Hi!
>>
>> I have a problem with the osds getting full on our cluster.
>>
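
P.S. Not the walkthrough from that thread, just a rough sketch of the kind of
checks I mean, assuming a Hammer-or-later cluster and the stock ceph CLI; the
OSD id (12), weight (0.9) and threshold (120) below are placeholders, adjust
for your cluster:

   # which OSDs are flagged full / near full
   ceph health detail

   # pg_num / pgp_num per pool (lines start with "pool <id> '<name>' ...")
   ceph osd dump | grep ^pool

   # per-OSD utilization and PG count (available since Hammer)
   ceph osd df

   # lower the reweight of a single over-full OSD so data moves off it
   ceph osd reweight 12 0.9

   # or let ceph pick the outliers itself
   # (120 = reweight OSDs above 120% of the average utilization)
   ceph osd reweight-by-utilization 120

Note that "ceph osd reweight" changes the 0-1 override weight, not the CRUSH
weight, so it only shifts data away from that OSD without touching the map
itself. Any of the reweight steps will trigger backfill, so keep an eye on
the full/near-full ratios while the data moves.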