Re: Is it safe to add different OS but same ceph version to the existing cluster?




I've added Ubuntu 20.04 nodes to my bare-metal-deployed Ceph Octopus 15.2.17 cluster, next to the existing CentOS 8 nodes, and I see something interesting: disk utilization is higher on Ubuntu than on CentOS, while CPU usage is lower (in this picture you can see 4 nodes, one column per node; the last column is the Ubuntu node):

Could this be because of the missing HPC tuned profile on Ubuntu 20.04?
Ubuntu 20.04 doesn't ship an HPC tuned profile, so I've used the latency-performance profile, which the HPC profile is based on:

This is the latency-performance profile:

summary=Optimize for deterministic performance at the cost of increased power consumption

The HPC tuned profile has these additional values on CentOS and on Ubuntu 22.04:
summary=Optimize for HPC compute workloads
description=Configures virtual memory, CPU governors, and network settings for HPC compute workloads.
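To see exactly what the missing profile would change, one could dump both profiles' configuration side by side and compare the deltas. A minimal sketch (the paths assume the stock tuned packaging under /usr/lib/tuned; adjust if your distro installs profiles elsewhere):

```shell
# List the settings each tuned profile would apply, so the
# latency-performance vs. hpc-compute deltas can be compared directly.
for p in latency-performance hpc-compute; do
  conf="/usr/lib/tuned/$p/tuned.conf"
  echo "== $p =="
  if [ -r "$conf" ]; then
    grep -v '^#' "$conf"     # drop comment lines, keep the actual settings
  else
    echo "(profile not installed on this host)"
  fi
done
```

On a host that has both profiles installed, the lines present only under hpc-compute are the candidates to port over to the Ubuntu 20.04 nodes.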

If someone is well versed in these kernel parameter values, do you see anything that might be related to the high disk utilization?
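For narrowing down the utilization gap itself, it might help to compare the extended iostat columns on a CentOS node versus an Ubuntu node over the same interval. A sketch (the `sd` device-name filter is just an example; substitute your actual OSD data disks):

```shell
# Sample extended disk stats: 3 reports at 5-second intervals, then print
# the device name and %util (the last column of sysstat's iostat -x output)
# for disks whose names start with "sd".
iostat -x 5 3 | awk 'NR > 1 && $1 ~ /^sd/ {print $1, $NF}'
```

If %util differs but the per-device r/s, w/s and average request sizes are similar across the two OSes, the gap is more likely accounting/scheduler behavior than real extra I/O.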

Thank you

From: Milind Changire <mchangir@xxxxxxxxxx>
Sent: Monday, August 7, 2023 11:38 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxx>
Subject: Re:  Is it safe to add different OS but same ceph version to the existing cluster?


On Mon, Aug 7, 2023 at 8:23 AM Szabo, Istvan (Agoda)
<Istvan.Szabo@xxxxxxxxx> wrote:
> Hi,
> I have an octopus cluster on the latest octopus version with mgr/mon/rgw/osds on centos 8.
> Is it safe to add an ubuntu osd host with the same octopus version?
> Thank you

Well, the Ceph source bits surely remain the same. The binary bits
could differ due to better compiler support on the newer OS.
So, assuming the new Ceph is deployed on the same hardware platform,
things should be stable.
Also, assuming that the relevant OS tunables and Ceph features and
config options have been configured to match the older deployment, the
new deployment should work fine and as expected.
Having said all this, I'd still recommend testing the move one node at
a time rather than executing a bulk move.
Making a list of the device types involved and checking driver support
on the new OS would also be a prudent thing to do.
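The one-node-at-a-time approach above can be paired with a quick sanity check after each move; a sketch, assuming an admin `ceph` CLI on the host running it:

```shell
# After moving each node, confirm the cluster still reports a single
# ceph version, check overall health, and verify no OSDs are left down.
ceph versions
ceph health detail
ceph osd tree | grep -i ' down ' || echo "no down OSDs"
```

Only proceed to the next node once the cluster is back to HEALTH_OK and all PGs are active+clean.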

