Re: Data distribution question

Shain:
Have you looked into doing a "ceph osd reweight-by-utilization" by chance?  I've found that data distribution is rarely perfect, and on aging clusters I always have to do this periodically.
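
Roughly along these lines (the threshold of 120 below is just the usual default, i.e. reweight anything more than 20% above mean utilization; adjust for your cluster):

    # Dry run: show which OSDs would be reweighted and by how much
    ceph osd test-reweight-by-utilization 120
    # Apply the reweight once the proposed changes look reasonable
    ceph osd reweight-by-utilization 120

The dry-run step is worth doing first, since the real command kicks off backfill as soon as it is applied.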

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com 
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 9001 / ISO 20000 / ISO 27001 / CMMI Level 3

Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, copy, use, disclosure, or distribution is STRICTLY prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.

On Apr 30, 2019, at 11:34 AM, Shain Miley <smiley@xxxxxxx> wrote:

Hi,

We have a cluster with 235 OSDs running version 12.2.11, with a combination of 4 and 6 TB drives.  Data distribution across the OSDs varies from 52% to 94% utilization.

I have been trying to figure out how to get this a bit more balanced as we are running into 'backfillfull' issues on a regular basis.

I've tried adding more PGs, but this did not seem to do much to reduce the imbalance.
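
For reference, that was done roughly as follows (the pool name and target count below are placeholders, not our exact values):

    # Raise the PG count for the pool; on Luminous both values need to be set
    ceph osd pool set <pool-name> pg_num 8192
    ceph osd pool set <pool-name> pgp_num 8192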

Here is the end of the output from 'ceph osd df':

MIN/MAX VAR: 0.73/1.31  STDDEV: 7.73

We have 8199 PGs total, with 6775 of them in the pool that holds 97% of the data.

The other pools (data, metadata, .rgw.root, .rgw.control, etc.) are not really used.  I have thought about deleting those unused pools so that most, if not all, of the PGs are used by the pool that holds the majority of the data.
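
If I do go that route, I assume it would look something like this (the pool name is a placeholder, and pool deletion has to be enabled on the monitors first):

    # Temporarily allow pool deletion (disabled by default)
    ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
    # Delete an unused pool; the name is given twice as a safety check
    ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it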

However, before I do that, is there anything else I can do or try in order to balance out the data more uniformly?

Thanks in advance,

Shain

--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

