Re: Octopus auto-scale causing HEALTH_WARN re object numbers [EXT]


 



On 02/03/2021 16:38, Matthew Vernon wrote:

root@sto-t1-1:~# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average; 9 pgs not deep-scrubbed in time
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
    pool default.rgw.buckets.data objects per pg (313153) is more than 23.4063 times cluster average (13379)

...which seems like the wrong thing for the auto-scaler to be doing. Is this a known problem?

The autoscaler has finished, and I still have the health warning:

root@sto-t1-1:~# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
    pool default.rgw.buckets.data objects per pg (313153) is more than 23.0871 times cluster average (13564)
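For reference, the skew threshold behind this warning is the `mon_pg_warn_max_object_skew` option (default 10), so a pool at ~23x the cluster average trips it comfortably. A sketch of how one might inspect and, if desired, raise it (the exact daemon that honours the setting has moved between releases, so treat the `mgr` target as an assumption to verify on your version):

```shell
# Check the current skew threshold (default is 10).
ceph config get mgr mon_pg_warn_max_object_skew

# Optionally raise it to silence the warning for a known-skewed pool;
# 0 disables the check entirely. Verify against your release's docs first.
ceph config set mgr mon_pg_warn_max_object_skew 30
```

Raising the threshold only hides the symptom, of course; it does not change how the autoscaler sizes the pool.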

Am I right that the auto-scaler only considers size, and never object count?

If so, am I right that this is a bug?

I mean, I think I can bodge around it with pg_num_min, but I thought one of the merits of Octopus was that the admin had to spend less time worrying about pool sizes...

Regards,

Matthew


--
The Wellcome Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



