Erroneous stats output (ceph df) after increasing PG number

On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote:
> Hi all,
> 
> We recently added many OSDs to our production cluster.
> This brought us to a point where the number of PGs we had assigned to our 
> main (heavily used) pool was well below the recommended value.
> 
> We increased the PG number (incrementally to avoid huge degradation ratios) 
> to the recommended optimal value.
> 
> Once the procedure ended, we noticed that the POOLS: section of the 
> "ceph df" output does not represent the actual state.
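
For reference, the incremental increase described above is normally done
by raising pg_num and then pgp_num in steps; the pool name and target
below are placeholders:

    ceph osd pool set <pool> pg_num <target>     # split PGs: creates the new, initially empty PGs
    ceph osd pool set <pool> pgp_num <target>    # update placement: lets data rebalance onto them

Raising pg_num splits the existing PGs; raising pgp_num is what actually
triggers the data movement.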

How did it mismatch reality?

> Has anyone noticed this before, and if so, is there a fix?

There is some ambiguity in the stats after PG split that gets cleaned up 
on the next scrub.  I wouldn't expect it to be noticeable, though ...
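
If you don't want to wait for the regular scrub schedule, one way to
force the cleanup is to queue scrubs on the affected pool's PGs by hand
(the pool id below is a placeholder):

    ceph pg dump pgs_brief | grep '^<pool-id>\.'   # list the pool's PG ids
    ceph pg scrub <pgid>                           # queue a scrub on one PG

Each PG's stats should be corrected once its scrub completes.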

sage

