Re: Balancing PGs across OSDs

Hi Konstantin,

The situation after moving the PGs with osdmaptool is not really better than without it:

$ ceph osd df class hdd 
[…]
MIN/MAX VAR: 0.86/1.08  STDDEV: 2.04

The OSD with the fewest PGs has 66 of them; the one with the most has 83.

Is this the expected result? I'm not sure exactly how much unusable space this imbalance translates into, but I'm sure it is a relevant amount.
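
If I assume PGs of roughly equal size, so that the VAR column tracks utilization, a rough estimate is: the fullest OSD fills about 1.08 times faster than the average one, so the hdd class is effectively full once the average OSD is only at about 1/1.08 of its capacity (and the nearfull/full ratios stop it even a bit earlier). Something like:

$ awk 'BEGIN { max_var = 1.08; printf "~%.1f%% of the raw hdd capacity unusable\n", (1 - 1/max_var) * 100 }'
~7.4% of the raw hdd capacity unusable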

Thanks for all your patience and support
Lars



Tue, 17 Dec 2019 07:45:24 +0100
Lars Täuber <taeuber@xxxxxxx> ==> Konstantin Shalygin <k0ste@xxxxxxxx> :
> Hi Konstantin,
> 
> The cluster has finished its backfilling.
> I got this situation:
> 
> $ ceph osd df class hdd
> […]
> MIN/MAX VAR: 0.86/1.08  STDDEV: 2.05
> 
> Now I created a new upmap.sh and sourced it. The cluster is busy again, moving 3% of its objects.
> I'll report the result.
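> 
> For reference, the commands were roughly the ones from your mail:
> 
> $ ceph osd getmap -o osdmap.om
> $ osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data --upmap-pool=cephfs_metadata --upmap-deviation=0 --upmap-max=1000
> $ source upmap.sh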
> 
> Thanks for all your hints.
> 
> Regards,
> Lars
> 
> 
> 
> Mon, 16 Dec 2019 15:38:30 +0700
> Konstantin Shalygin <k0ste@xxxxxxxx> ==> Lars Täuber <taeuber@xxxxxxx> :
> > On 12/16/19 3:25 PM, Lars Täuber wrote:  
> > > Here it comes.    
> > 
> > Maybe there is a bug in osdmaptool: when less than one pool is defined,
> > do_upmap is not actually executed.
> > 
> > Try like this:
> > 
> > `osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data 
> > --upmap-pool=cephfs_metadata --upmap-deviation=0 --upmap-max=1000`
> > 
> > In my env the upmaps are actually generated only for pool_id 1.
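> > 
> > To check which pools the generated upmaps actually target, you can count the
> > pool ids in the script. The lines should look like
> > "ceph osd pg-upmap-items <pgid> <from-osd> <to-osd>", and the pool id is the
> > part of the pgid before the dot, so something like this works:
> > 
> > `grep pg-upmap-items upmap.sh | awk -F'[ .]' '{ c[$4]++ } END { for (p in c) print "pool " p ": " c[p] " upmaps" }'`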
> > 
> > CC'ed David.
> > 
> > 
> > 
> > k