Re: Data distribution question

The upmap balancer in v12.2.12 works really well... Perfectly uniform on our clusters.
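For anyone curious, switching over is roughly the following (a sketch; upmap
requires that every client be Luminous or newer):

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on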

.. Dan


On Tue, 30 Apr 2019, 19:22 Kenneth Van Alstyne, <kvanalstyne@xxxxxxxxxxxxxxx> wrote:
Unfortunately it looks like he’s still on Luminous, but if upgrading is an option, the tooling is indeed significantly better.  If I recall correctly, at least the balancer module is available in Luminous.
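If memory serves, checking for and enabling it on Luminous looks something like:

ceph mgr module ls
ceph mgr module enable balancer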

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com 
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 9001 / ISO 20000 / ISO 27001 / CMMI Level 3

Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, copy, use, disclosure, or distribution is STRICTLY prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.

On Apr 30, 2019, at 12:15 PM, Jack <ceph@xxxxxxxxxxxxxx> wrote:

Hi,

I see that you are using RGW.
RGW creates many pools, but most of them hold only metadata and
configuration, so they store very little data.
Such pools do not need more than a couple of PGs each (I use pg_num = 8).

You need to allocate your PGs to the pool that actually stores the data.
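For example, to grow the bucket data pool (the pool name and count below are
only placeholders; check "ceph df" for your actual data pool, and note that
on Luminous pg_num can only be increased, never decreased):

ceph osd pool set default.rgw.buckets.data pg_num 2048
ceph osd pool set default.rgw.buckets.data pgp_num 2048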

Please run the following so we can learn more:
Print the pg_num per pool:
for i in $(rados lspools); do echo -n "$i: "; ceph osd pool get $i pg_num; done

Print the usage per pool:
ceph df

Also, instead of running "ceph osd reweight-by-utilization", check out
the balancer module: http://docs.ceph.com/docs/mimic/mgr/balancer/
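
If you try it, a rough sketch ("myplan" is just an example plan name):

ceph balancer status
ceph balancer eval
ceph balancer optimize myplan
ceph balancer show myplan
ceph balancer execute myplan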

Finally, in Nautilus, pg_num can now scale up and down automatically.
See https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/
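
On Nautilus, enabling that looks something like this (the pool name is a
placeholder):

ceph mgr module enable pg_autoscaler
ceph osd pool set default.rgw.buckets.data pg_autoscale_mode on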


On 04/30/2019 06:34 PM, Shain Miley wrote:
Hi,

We have a cluster with 235 OSDs running version 12.2.11 with a
combination of 4 and 6 TB drives.  Utilization across OSDs varies
from 52% to 94%.

I have been trying to figure out how to make this more balanced, as
we are regularly running into 'backfillfull' issues.

I've tried adding more PGs, but this did not seem to do much to reduce
the imbalance.

Here is the end output from 'ceph osd df':

MIN/MAX VAR: 0.73/1.31  STDDEV: 7.73

We have 8199 PGs total, with 6775 of them in the pool that holds 97% of
the data.

The other pools are barely used (data, metadata, .rgw.root,
.rgw.control, etc.).  I have thought about deleting those unused pools
so that most, if not all, of the PGs serve the pool holding the
majority of the data.

However, before I do that, is there anything else I can do or try to
balance the data more uniformly?

Thanks in advance,

Shain


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
