Re: OSD space imbalance

There are three factors that impact the disk utilization of an OSD:
 1. the number of PGs on the OSD (determined by CRUSH)
 2. the number of objects within each PG (picking a power-of-two PG count helps keep this even)
 3. deviation in object size

With 'ceph osd reweight-by-pg' you can tune (1). If you would like a better understanding of the root cause in your cluster, 'ceph pg dump' gives you the raw data for (1) and (2).
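For example, a rough one-liner to count PGs per OSD from the pg map (just a sketch -- the column layout of 'pg dump' output varies between releases, so check the header first; here the UP set is assumed to be the 3rd column of 'pgs_brief'):

 # print "pg-count osd-id", busiest OSD first
 ceph pg dump pgs_brief | awk '$3 ~ /^\[/ {print $3}' | tr -d '[]' \
     | tr ',' '\n' | sort -n | uniq -c | sort -rn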

Once the cluster is filled, you probably want to go with 'ceph osd reweight-by-utilization'. Be careful with that, though, since it can incur a lot of data movement...
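If you do go that route, a minimal sketch (the integer is an overload threshold in percent; 120 is the default, and values closer to 100 touch more OSDs):

 # only reweight OSDs above 120% of the average utilization
 ceph osd reweight-by-utilization 120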

----------------------------------------
> To: ceph-users@xxxxxxxxxxxxxx
> From: vedran.furac@xxxxxxxxx
> Date: Fri, 14 Aug 2015 00:15:17 +0200
> Subject: Re: OSD space imbalance
>
> On 13.08.2015 18:01, GuangYang wrote:
>> Try 'ceph osd  <int>' right after creating the pools?
>
> Would it do any good now that the pool is in use and nearly full? I
> can't re-create it now. Also, what's the integer argument in the
> command above? I failed to find a proper explanation in the docs.
Please check it out here - https://github.com/ceph/ceph/blob/master/src/mon/OSDMonitor.cc#L469
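If I read that code right, it is an overload threshold in percent: OSDs holding more than that fraction of the average PG count get their weight reduced. For example:

 # treat OSDs with >120% of the average PG count as overloaded
 ceph osd reweight-by-pg 120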
>
>> What is the typical object size in the cluster?
>
> Around 50 MB.
>
>
> Thanks,
> Vedran
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



