Re: OSD node type/count mixes in the cluster

Hi,

We are actually using 3x Intel servers with 12 OSDs each and one Supermicro with 24 OSDs in one Ceph cluster, with journals on NVMe in each server. We have not seen any issues yet.
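In case it helps anyone reproducing this layout: in the pre-Luminous (ceph-disk) era, journals on a shared NVMe device were typically set up per data disk roughly as in the sketch below. The device names are hypothetical, and the journal partition size is taken from "osd journal size" in ceph.conf.

    # Prepare an OSD: data on /dev/sdb, journal as a new partition
    # that ceph-disk carves out of the shared NVMe device.
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    # Activate the freshly prepared data partition.
    ceph-disk activate /dev/sdb1

Repeated once per data disk, so each OSD ends up with its own journal partition on the NVMe.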

Best
Mehmet

On 9 June 2017 19:24:40 MESZ, Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:

Thanks David for sharing your experience, appreciate it.

 

--

Deepak

 

From: David Turner [mailto:drakonstein@xxxxxxxxx]
Sent: Friday, June 09, 2017 5:38 AM
To: Deepak Naidu; ceph-users@xxxxxxxxxxxxxx
Subject: Re: OSD node type/count mixes in the cluster

 

I ran a cluster with 2 generations of the same vendor's hardware: 24-OSD Supermicros and 32-OSD Supermicros (with faster/more RAM and CPU cores). The cluster itself ran decently well, but the load difference between the 2 types of nodes was drastic. It required me to run the cluster with a separate config file for each type of node and was an utter PITA when troubleshooting bottlenecks.
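To make the two-config-file setup concrete, here is a minimal sketch of what per-node-class overrides can look like, with illustrative option values (not David's actual settings). The idea is that throttles which scale with CPU and RAM get different values on each hardware generation:

    # ceph.conf fragment on the older 24-OSD nodes
    [osd]
    osd op threads = 2
    osd max backfills = 1
    osd recovery max active = 1

    # ceph.conf fragment on the newer 32-OSD nodes
    [osd]
    osd op threads = 8
    osd max backfills = 3
    osd recovery max active = 3

Two diverging files that must be kept in sync by hand is exactly where the troubleshooting pain comes from.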

Ultimately I moved hardware around, kept a legacy cluster on the old hardware, and created a new cluster on the newer configuration. In general it was very hard to diagnose certain bottlenecks because everything just looked so different between the node types. The primary one I encountered was snap trimming, because we were deleting thousands of snapshots per day.
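For anyone who hits the same snap-trim bottleneck, the usual knobs in that era were the trim sleep and priority settings; the values below are illustrative, not tuned recommendations:

    [osd]
    # Sleep between snap trim operations so client I/O is not starved.
    osd snap trim sleep = 0.1
    # Deprioritize snap trim work relative to client ops.
    osd snap trim priority = 1

They can also be injected at runtime without restarting the OSDs:

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'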

If you aren't pushing any limits of Ceph, you will probably be fine. But if you have a really large cluster, use a lot of snapshots, or push your cluster harder than the average user... then I'd avoid mixing server configurations in a cluster.

 

On Fri, Jun 9, 2017, 1:36 AM Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:

Wanted to check if anyone has a Ceph cluster with mixed-vendor servers, all with the same disk size (i.e. 8TB) but a different disk count per server, e.g. 10 OSD servers from Dell with 60 disks per server and another 10 OSD servers from HP with 26 disks per server.

If so, does that change any performance dynamics, or is it not advisable?

--
Deepak
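One dynamic worth spelling out (a back-of-the-envelope sketch, not a measured result from this thread): CRUSH weights hosts by raw capacity, so with identical 8TB disks a 60-disk Dell host carries 60 x 8 = 480TB of CRUSH weight versus 26 x 8 = 208TB for an HP host. Each Dell host will therefore hold roughly 2.3x the data and see roughly 2.3x the client and recovery I/O. The actual per-host weights and utilization can be checked with:

    ceph osd tree
    ceph osd df tree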
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

