Re: Suggestions/experiences with mixed disk sizes and models from 4TB - 14TB

I've run my home cluster with drives ranging in size from 500GB to 8TB before, and the biggest issue you run into is that the bigger drives will get a proportionally larger number of PGs, which increases the memory requirements on them.  Typically you want around 100 PGs/OSD, but if you mix 4TB and 14TB drives in a cluster, the 14TB drives will have 3.5 times the number of PGs.  So if the 4TB drives have 100 PGs, the 14TB drives will have 350.  Or if the 14TB drives have 100 PGs, the 4TB drives will only have about 28 PGs on them.  Using the balancer plugin in the mgr will pretty much be required.
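
As a quick back-of-the-envelope illustration (just a sketch, assuming PG placement follows CRUSH weight, which by default tracks the OSD's raw capacity; the PG counts are the example figures from above):

    # Rough PG-per-OSD arithmetic for mixed drive sizes (illustrative only,
    # not a Ceph API call); PG counts scale roughly with CRUSH weight.
    small_tb, big_tb = 4.0, 14.0
    ratio = big_tb / small_tb                      # 3.5

    # Aim for ~100 PGs on the small OSDs:
    print(f"4 TB at 100 PGs -> 14 TB at ~{int(100 * ratio)} PGs")
    # Or cap the big OSDs at ~100 PGs instead:
    print(f"14 TB at 100 PGs -> 4 TB at ~{int(100 / ratio)} PGs")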

 

Also, since you're using EC, you'll need to make sure the math still works out with these nodes receiving 2-3.5 times the data.
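
For example, a rough way to sanity-check the capacity side (purely illustrative; the 4+2 EC profile and the host mix below are made-up numbers, not something from this thread):

    # Rough capacity check for EC on mixed-size hosts (illustrative sketch).
    # Assumes failure domain = host and data spread in proportion to host weight.
    k, m = 4, 2                              # data/coding chunks; needs >= k+m hosts
    hosts_tb = [96, 96, 96, 96, 192, 192]    # e.g. four 24x4TB hosts, two 24x8TB hosts

    raw_tb = sum(hosts_tb)
    usable_tb = raw_tb * k / (k + m)         # before nearfull/full ratios and overhead
    print(f"raw {raw_tb} TB -> usable ~{usable_tb:.0f} TB with {k}+{m} EC")

    # Bigger hosts take a proportionally bigger share of the data:
    for h in sorted(set(hosts_tb)):
        print(f"{h} TB host carries ~{h / raw_tb:.0%} of the data")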

 

Bryan

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx>
Date: Wednesday, January 16, 2019 at 2:33 AM
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: [ceph-users] Suggestions/experiences with mixed disk sizes and models from 4TB - 14TB

 

Dear Ceph users,

 

I’d like to get some feedback on the following thought:

 

Currently I run some nodes with 24*4TB bluestore OSDs each. The main focus is on storage space over IOPS.

 

We use erasure code and cephfs, and things look good right now.

 

The „but“ is: I do need more disk space and don’t have much more rack space available, so I was thinking of adding some 8TB or even 12TB OSDs and/or exchanging the 4TB OSDs for bigger disks over time.

 

My question is: what are your experiences with the current >=8TB SATA disks? Are there some very bad models out there which I should avoid?

 

The current OSD nodes are connected by 4*10Gb bonds, so regarding replication/recovery speed: is a 24-bay chassis with bigger disks useful, or should I go with smaller chassis? Or does the chassis size not matter all that much in my setup?

 

I know EC is quite compute-intensive, so maybe bigger disks have an impact there as well?

 

Lots of questions; maybe you can help answer some of them.

 

                Best regards, and thanks a lot for any feedback. Götz

 

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
