NVME node disks maxed out during rebalance after adding to existing cluster

Hi,

Since adding the extra node to the cluster I'm seeing slow heartbeats on both the front and back networks, and this occasionally causes slow ops and failed-OSD reports.

I'm extending our cluster with 3 servers configured rather differently from the original 12.
The cluster (latest Octopus) is an object-store cluster with 12 identical nodes (8x 15.3 TB SSDs each, 4 OSDs per drive, 512 GB RAM, 96 vCPU cores ...) hosting a 4:2 EC data pool.
The 3 new nodes have 8x 15.3 TB NVMe drives with 4 OSDs on each; the first one is already done, and the second is in progress now but hits this issue during rebalance.

The NVMe drive specification: https://i.ibb.co/BVmLKnf/currentnvme.png


The old server SSD spec: https://i.ibb.co/dkD3VKx/oldssd.png

iostat on the new NVMe: https://i.ibb.co/PF0hrVW/iostat.png

Rebalance is running with the most conservative options: max backfills, recovery op priority, and max active recovery all set to 1.
Even so it generates a very large iowait; it looks like the disks aren't fast enough (but why didn't the previously added node have this issue?).
Here are the metrics for the disk that is running the backfill/rebalance now (FYI we have 3.2B objects in the cluster):
https://i.ibb.co/LNXCRbj/disks.png
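For reference, this is roughly how the throttles described above are applied (a sketch using the standard Octopus option names; verify each with `ceph config help <option>` before relying on it):

```shell
# Recovery/backfill throttled to the minimum, cluster-wide for all OSDs:
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_op_priority 1

# If that is still too aggressive, recovery can be slowed further by
# sleeping between recovery ops (option applies to SSD/NVMe-backed OSDs):
ceph config set osd osd_recovery_sleep_ssd 0.1

# Watch per-device utilization and iowait while backfill runs:
iostat -x 5
```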

I wonder what I'm missing, or how this can happen?

Here you can see gigantic latencies, failed OSDs and slow ops:
https://i.ibb.co/Jn0sj9g/laten.png
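In case it helps anyone look at the same symptoms, the slow ops and failed-OSD reports can also be inspected from the CLI with the standard health and admin-socket commands (osd.42 below is just a placeholder id, run the daemon commands on the host of the affected OSD):

```shell
ceph health detail                     # which OSDs are reporting slow ops
ceph daemon osd.42 dump_ops_in_flight  # operations currently stuck in flight
ceph daemon osd.42 dump_historic_ops   # recently completed (slow) operations
```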
Thank you for your help

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


