Too few PGs per OSD (autoscaler)


 



	Hello, Ceph users,

TL;DR: PG autoscaler should not cause the "too few PGs per OSD" warning

Detailed:
Some time ago, I upgraded the hardware in my virtualization+Ceph cluster,
replacing 30+ old servers with fewer than 10 modern ones. I immediately got
the "too many PGs per OSD" warning, so I had to add more OSDs, even though
I did not need the space at that time. So I eagerly awaited the PG
autoscaling feature in Nautilus.

Yesterday I upgraded to Nautilus and enabled the autoscaler on my RBD pool.
First I got the "objects per pg (XX) is more than XX times cluster average"
warning for several hours, which was later replaced with
"too few PGs per OSD".

I can work around this by setting a minimum number of PGs per pool, but
I still think the autoscaler should not be this aggressive: it should not
reduce the number of PGs below the PGs-per-OSD limit.
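For reference, a sketch of the workaround: Nautilus pools have a
`pg_num_min` property that the autoscaler will not shrink below. The pool
name `rbd` and the value 64 below are just examples; pick values suited to
your own pool and OSD count.

```shell
# Show what the autoscaler currently intends to do for each pool
ceph osd pool autoscale-status

# Set a floor the autoscaler will not go below (example value)
ceph osd pool set rbd pg_num_min 64

# Enable the autoscaler on the pool (if not already on)
ceph osd pool set rbd pg_autoscale_mode on
```

With `pg_num_min` in place, the autoscaler still reduces oversized pools,
but stops before it triggers the "too few PGs per OSD" warning.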

(that said, the ability to reduce the number of PGs in a pool in Nautilus
works well for me, thanks for it!)

	Thanks,

-Yenya

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
sir_clive> I hope you don't mind if I steal some of your ideas?
 laryross> As far as stealing... we call it sharing here.   --from rcgroups
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


