Re: Nautilus - PG count decreasing after adding OSDs

Hi,

that sounds like the pg_autoscaler is doing its work. Check with:

ceph osd pool autoscale-status

I don't think ceph is eating itself or that you're losing data. ;-)
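
If the autoscaler does turn out to be reducing pg_num and you'd rather keep
it where it is, you can also set the autoscale mode per pool. A minimal
sketch (assuming Nautilus or later; <pool> is a placeholder for your pool
name):

ceph osd pool get <pool> pg_num                  # current PG count for the pool
ceph osd pool set <pool> pg_autoscale_mode warn  # only report, don't adjust
ceph osd pool set <pool> pg_autoscale_mode off   # leave pg_num alone entirely

With "warn" the autoscaler just raises a health warning instead of changing
pg_num, so you can watch what it would do before letting it act.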


Quoting Dave Hall <kdhall@xxxxxxxxxxxxxx>:

Hello,

About 3 weeks ago I added a node and increased the number of OSDs in my
cluster from 24 to 32, and then marked one old OSD down because it was
frequently crashing.

After adding the new OSDs the PG count jumped fairly dramatically, but ever
since, amidst a continuous low level of rebalancing, the number of PGs has
gradually decreased to about 25% below its peak value.  Although I don't
have specific notes, my perception is that the current number of PGs is
actually lower than it was before I added OSDs.

So what's going on here?  It is possible to imagine that my cluster is
slowly eating itself, and that I'm about to lose 200TB of data. It's also
possible to imagine that this is all due to the gradual optimization of the
pools.

Note that the primary pool is an EC 8+2 pool containing about 124 TB.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

